Rethinking Cybersecurity in the Age of AI

A Conversation with Julius Muth, CEO of Revel8
As AI-powered attacks move from theory to everyday reality, the assumptions we’ve relied on in cybersecurity are starting to crack. Deepfakes, synthetic identities, and multi‑channel social engineering aren’t edge cases anymore; they’re quickly becoming the norm.
To unpack what this shift really means, Sif, our Head of Community and Growth, spoke with Julius Muth, Founder of Revel8. He shared detailed insights into the future of AI‑driven threats and cybersecurity.
Sif: What assumptions in traditional cybersecurity threat models break once attackers can iterate with AI at machine speed?
Julius: The biggest assumption that breaks is scarcity: of time, creativity, and credibility. Traditional models assume attackers are constrained in how often they can try, how tailored an attack can be, and how convincing it can sound or look. AI removes all three. Attacks can be continuously tested, refined, and personalized in near real time, using cloned voices, synthetic faces, and accurate internal context. Static controls, annual training cycles, and signature-based detection were built for episodic, low-fidelity attacks, not for adaptive adversaries that can generate believable human interactions faster than defenders can update rules.
Most teams assume attacks are generic, channel-specific, and visibly “malicious.” But modern attacks blend into real workflows, span multiple channels, and exploit trust rather than technical flaws. Tools optimized for blocking bad content struggle when the content is contextually correct and socially engineered. And in the near future, when realism and multi-channel coordination become the default rather than the exception, attack volumes will surge dramatically. At that point, traditional defenses that rely on filters and human review will no longer scale economically, forcing organizations to invest in far more complex, expensive behavior-based detection systems. The attacks that are cheap to stop today will become prohibitively expensive to defend against tomorrow.
Sif: How do you defend systems when human verification is no longer trustworthy?
Julius: Identity becomes unreliable the moment familiarity can be convincingly synthesized at scale. If a voice, face, writing style, and timing all match expectations, humans stop verifying. At that point, “who is this?” is no longer a sufficient security question, especially in high-trust internal environments. To defend systems when human verification is no longer trustworthy, you have to stop treating humans as a last line of defense and start treating them as a trainable surface. Defense has to focus on conditioning responses under realistic pressure, exposing people to believable attacks before real attackers do. The goal isn’t perfect detection; it’s resilient behavior when trust is manipulated.
Sif: What’s the most dangerous security assumption AI teams are making today?
Julius: AI teams are doubling down on detection, prevention, and response tooling, assuming that if systems get smarter, humans matter less. But attackers are automating persuasion, not just execution. They bypass technical controls by operating inside trust, context, and human workflows. Without actively training how people behave under realistic attack conditions, automation creates confidence, not resilience. Ignoring the human learning loop is how organizations fall behind.
There’s another danger: when automation becomes predictable to an attacker, it creates new vulnerabilities. Auto-approvals, silent remediations, and opaque AI decisions can hide early warning signals that human judgment would catch. Over-automation dulls the intuition and situational awareness defenders need most, exactly when attacks are becoming more sophisticated.
Sif: What defensive capabilities feel inevitable, and which are wishful thinking?
Julius: Inevitable: personalized, continuous attack simulation; risk profiling at the individual level; and training that mirrors real attacker tactics across channels. Wishful thinking: universal deepfake detection, perfect identity verification, and AI systems that “solve” social engineering on their own.
Huge thanks to Julius for sitting down with Sif and sharing his insights. We’re excited to see where Revel8 takes cybersecurity next!
Revel8’s mission is to help organizations build lasting human resilience against AI-driven threats: by simulating real attacks, measuring behavioral risk, and turning awareness into something evidence-based and actionable.
Become a part of the AI Campus.
There are many ways to join our community. Sign up for our newsletter below, or select one of the other two options and get in touch with us:
