April 15, 2026 · 9 min read · JEM Labs

End of détente: Why detecting harder isn't enough in the Mythos era

AI changes vulnerability discovery economics, but the deeper issue is deferred trust and long-lived sessions that are rarely re-verified.

AI Security · Architecture · Identity · Thought Leadership

A Longstanding Equilibrium Is Breaking

Today's New York Times features an insightful opinion piece by Raffi Krikorian, Mozilla's CTO. In it he describes a decades-long détente that has held the internet together: writing software was hard. Finding bugs was hard. The difficulty on both sides created an uneasy equilibrium. It was never ideal, but it held.

By most accounts, Mythos and the class of models it foreshadows shatter it.

Not gradually. As Krikorian recounts, Anthropic's previous model could weaponize a known Firefox vulnerability only a few times in hundreds of attempts. Mythos reportedly succeeded nearly every time. It also found a 27-year-old OpenBSD vulnerability in code that automated security tools had examined millions of times without surfacing it.

Krikorian's concern, one of equity, is doubtless well placed: who gets access to Mythos-class defensive capability, and who gets left behind? The volunteer open-source maintainer. The small business owner who shipped a first app with AI assistance last week. The clinician who built a patient portal without a security team.

That access gap matters. But underneath it is a deeper problem - one that affects even the best-resourced organizations inside Project Glasswing.

Detection-Centric Security Has a Structural Limit

The market reaction hinted at it. CrowdStrike and Palo Alto Networks were down on the year heading into last week. Both recovered after Glasswing inclusion was confirmed - Wedbush called them "an AI enforcement layer, not an AI casualty."

Perhaps.

But that recovery reflected relief that Anthropic chose them as partners, not confidence that their underlying architecture holds against Mythos-class threats. That distinction matters.

Almost everything in enterprise security today - EDR, SIEM, threat intelligence, zero-trust network access - rests on a shared assumption: attacks are detectable. Patterns exist. With good enough tooling, you can identify the anomaly and respond in time.

That model still has value. But it has a structural weakness.

If an AI can autonomously discover and exploit zero-day vulnerabilities - flaws that are unknown, unclassified, and have never triggered an alert - then there is no signature to match, no stable baseline to deviate from, and often no meaningful response window. The exploit is complete before the framework has a category for it. The tools are not failing. The assumption beneath them is.

The incumbent response will be predictable: better AI-powered detection, faster triage, more automated response. And in fairness, all of that helps. But this is still an arms race - and if AI collapses the cost of vulnerability discovery, the attacker's economics improve dramatically while yours do not.

You cannot win an arms race when the other side's cost structure improves faster than yours.

The Session Trust Decision We Keep Avoiding

There is a second problem, less discussed, and I think more fundamental.

The industry made a choice, years ago, that looks increasingly difficult to defend. Authenticate once. Issue a broadly scoped, long-lived token. Persist that trust across sessions, devices, networks, and months. Do not burden the user with repeated verification - it is bad UX.
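That choice has a concrete shape. The sketch below is a deliberately minimal illustration of the pattern, not any particular vendor's implementation: one password check at login, then a signed token the server will trust for months, with every later request checking only the signature and the expiry (the key, TTL, and field names are assumptions for illustration).

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-signing-key"  # illustrative only; real keys live in a KMS


def issue_token(user_id: str, ttl_seconds: int = 90 * 24 * 3600) -> str:
    """Authenticate once, then mint a token the server trusts for months."""
    body = base64.urlsafe_b64encode(
        json.dumps({"sub": user_id, "exp": time.time() + ttl_seconds}).encode()
    )
    sig = base64.urlsafe_b64encode(hmac.new(SECRET, body, hashlib.sha256).digest())
    return (body + b"." + sig).decode()


def validate(token: str) -> bool:
    """Every later request checks only signature and expiry --
    nothing about who is currently holding the token."""
    body, sig = token.encode().split(b".")
    expected = base64.urlsafe_b64encode(hmac.new(SECRET, body, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return False
    return json.loads(base64.urlsafe_b64decode(body))["exp"] > time.time()
```

Notice what `validate` never asks: which device, which network, or whether the human who logged in is still the one presenting the token.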

For example, I have spent the past week traveling across multiple countries on two continents, moving between airport Wi-Fi, satellite Wi-Fi, and hotel networks, sometimes over a VPN, other times without. None of the myriad cloud services I use every day once asked me to re-authenticate. These include the largest email, meeting, and messaging services on the planet, as well as smaller privacy- and security-focused services. I had to clear my cache and cookies just to force a re-authentication.

Maybe their risk signals correctly identified me throughout. Maybe a careful attacker - one who profiled my behavior before touching my session, who knew what my normal patterns looked like and replicated them - would have gotten the same free pass. I cannot tell from the outside. Neither can you. To me, it looked like hope that I continued to have control of my device - and little else.

That opacity is itself the problem.

Google's risk signals are designed to catch anomalous behavior. The question Mythos raises is whether an AI attacker will bother to look anomalous.

This is the vulnerability that no detection system cleanly reaches. A stolen session token, used carefully, does not look like an attack. It looks like the legitimate user. There is no alert, because nothing has deviated. The attacker is not breaking in - they are walking through a door that was left open weeks ago.
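The point can be made in a few lines. In a bearer-token model, the server sees only the string presented, so a replayed token is byte-for-byte indistinguishable from the legitimate one. This is a toy illustration of the bearer-token property, not any specific service's code:

```python
# A bearer token is just a string; the server cannot tell who is presenting it.
def handle_request(token: str, valid_tokens: set) -> str:
    """The only check most services perform between logins."""
    return "200 OK" if token in valid_tokens else "401 Unauthorized"


sessions = {"tok-abc123"}  # minted weeks ago, at login

legit = handle_request("tok-abc123", sessions)   # the real user
stolen = handle_request("tok-abc123", sessions)  # an attacker replaying the token
assert legit == stolen == "200 OK"  # identical to the server; nothing to detect
```

Proof-of-possession schemes such as DPoP (RFC 9449) exist precisely to break this symmetry by binding the token to a client-held key, but plain bearer tokens remain the overwhelming norm.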

The industry convinced itself this was an acceptable tradeoff because the alternative - frequent re-authorization - meant friction. Passwords are annoying. OTP codes break flow. So we stopped asking. We built an architecture of deferred trust and called it user experience.

Mythos does not create this vulnerability. It makes it dramatically cheaper to exploit.

What to Ask Instead

The right question, then, is not just: how do I detect this faster?

It is: how recently did my application actually verify that the right human is present?

That more consequential question points elsewhere.
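One concrete form the question can take is a freshness check: gate sensitive actions on how recently the human was actually verified, analogous to OIDC's `max_age` parameter and `auth_time` claim. The threshold and names below are illustrative assumptions, not a prescription:

```python
import time

# Illustrative threshold; real systems tune this per action sensitivity.
MAX_AUTH_AGE = 15 * 60  # seconds


def presence_is_fresh(last_verified_at: float, now: float = None) -> bool:
    """Gate sensitive actions on recent proof of the human,
    not on a login that happened months ago."""
    now = time.time() if now is None else now
    return (now - last_verified_at) <= MAX_AUTH_AGE
```

The hard part, of course, is making that re-verification cheap enough that asking every fifteen minutes is tolerable, which is where the friction argument comes back in.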

I will have more to say in a follow-up, including what an architecture looks like that makes frequent presence verification nearly frictionless, without sacrificing the platform diversity that enterprise deployments actually require.

But the first step is accepting that the tradeoff was always a choice, not an inevitability.

Krikorian is right that protection has to extend beyond organizations with resources and Glasswing access. That is a distribution problem, and it is urgent.

Deferred trust is an architecture problem. And it was urgent before Mythos made it visible.



We at JEM Labs have spent several years working on hardware-backed security architecture. Glad to compare notes with anyone thinking seriously about this.