Anthropic’s Mythos is a warning about AI power, but not the one Silicon Valley wants you to hear
What is Anthropic?
Anthropic is one of the leading frontier AI firms and the maker of Claude. It was founded by former OpenAI researchers and has spent much of the past two years presenting itself as the safety-conscious rival in the race to build ever more powerful models. Its latest restricted model, Claude Mythos Preview, is being positioned as its most capable system yet for coding, agentic work, and cyber-security-related tasks.
What is Mythos?
Claude Mythos Preview is not being marketed as a normal consumer chatbot release. Anthropic says it is a general-purpose frontier model whose broad software understanding makes it unusually strong at discovering, reproducing, and fixing vulnerabilities. Rather than releasing it publicly, Anthropic placed it inside Project Glasswing, a gated programme that gives selected companies and open-source defenders early access for defensive security work.
Why are people alarmed?
The alarm is not imaginary. Anthropic says Mythos has already identified thousands of zero-day vulnerabilities across critical infrastructure, and the UK AI Security Institute says the model shows continued improvement in capture-the-flag tasks and significant improvement on one multi-step cyber-attack simulation. But the public evidence so far supports a narrower conclusion than the loudest headlines: Mythos looks stronger, not yet plainly magical.
Anthropic’s Mythos matters because it reveals where frontier AI is really heading: away from novelty chat and toward contested control over software, infrastructure, and cyber power. That does not mean every apocalyptic headline is right. It means the underlying direction is real, even if the marketing has run ahead of the proof.
For a reader coming fresh to this story, the first thing to understand is that Mythos is not being sold as a mass product. Anthropic announced it through Project Glasswing, a programme designed to put the model into the hands of selected defenders, including major technology and infrastructure organisations, before any broader release is contemplated. Anthropic says launch partners include companies such as AWS, Apple, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, Nvidia, Palo Alto Networks, and the Linux Foundation. The company’s claim is straightforward: if models are becoming good enough to find and exploit serious vulnerabilities at scale, defenders need early access before attackers catch up.
That is the serious part of the story. The less serious part is the surrounding theatre. The internet very quickly turned Mythos into a symbol of superintelligence arriving early, as though Anthropic had unveiled a machine that had suddenly crossed from coding assistant into autonomous cyber predator. That is too simple. The public independent evidence does not yet show a clean rupture of that kind. What it shows is something more disciplined, and in some ways more unsettling: a steady escalation in cyber capability that is beginning to matter at institutional scale.
Anthropic’s own case is strong enough without embellishment. The company says Mythos is its most capable model yet, especially for coding and agentic tasks, and that this broader competence is exactly why its cyber-security performance is so striking. On Anthropic’s account, Mythos has already identified thousands of zero-day vulnerabilities across critical infrastructure. The company’s red-team write-up says the model performs strongly across the board but is strikingly capable at computer-security tasks, which is why Anthropic launched Project Glasswing rather than releasing it in the usual way.
The UK AI Security Institute’s evaluation broadly supports the view that Mythos is a meaningful step up, but not the view that it represents a sudden jump into a wholly different category. AISI says Mythos shows continued improvement in capture-the-flag challenges and significant improvement on multi-step cyber-attack simulations. On expert-level CTF tasks, the institute says Mythos succeeds 73 percent of the time. In its 32-step simulated corporate network attack, “The Last Ones,” Mythos was the first model to solve the full chain from start to finish, doing so in 3 out of 10 attempts, with an average of 22 steps completed compared with 16 for Claude Opus 4.6.
What the public evidence really shows
The best public reading at the moment is this: Mythos appears better than earlier models on cyber tasks and clearly strong enough to worry defenders. But the evidence published so far does not prove that a completely new threshold has been crossed in the dramatic way some commentary implied. AISI’s own language is “continued improvement” and “significant improvement,” not civilisation-ending discontinuity.
That is a serious result. It is also not the same as saying that Mythos can now shred any real enterprise system it touches. AISI is explicit about the limits. Its test ranges were easier than real-world environments: they lacked features often present in real systems, including active defenders and defensive tooling, and there were no penalties for actions that would trigger security alerts. The institute therefore says it cannot conclude that Mythos would be able to attack well-defended systems. That caveat matters. It is the difference between a real warning and a marketing myth.
This is where the debate becomes more interesting. Anthropic is not wrong to say that AI cyber capability is becoming a policy problem. It plainly is. But the company has also chosen to present Mythos through the language of gated urgency and controlled danger. That was a strategic decision. It pushed the conversation toward fear, scarcity, and state-level relevance. It also had the useful side effect of making Mythos sound like the model too dangerous for ordinary users, which is a powerful story in a market now crowded with claims of general intelligence.
There is a harder question hiding beneath that strategy. If Mythos is Anthropic’s flagship frontier model, what exactly is the headline claim the company most wants the world to hear? Right now it is not that Mythos can run an economy, replace a profession, or solve general reasoning at some new level. It is that Mythos is particularly effective at cyber-security-related tasks. That is not trivial. It is important. But it is also narrower than the grander rhetoric that has often surrounded frontier AI. The reader should notice that gap. The cyber story may be real, yet still reveal the current limits of the broader vision.
The geopolitical dimension is real as well, though again it needs discipline. Anthropic says it has been in ongoing discussions with US government officials about Mythos and its offensive and defensive cyber capabilities. Major governments, banks, and infrastructure operators are now treating frontier cyber models as tools that may need controlled access and defensive preparation rather than ordinary consumer release. Whether or not Mythos is overhyped as a singular leap, it is already being treated as a matter for states and critical systems rather than merely app developers and consumers. That is the real threshold that has been crossed.
The danger in all this is not just technical. It is interpretive. AI companies now have an obvious incentive to frame every new frontier release as either salvation or emergency. Both stories produce attention. Both stories support valuation. Both stories pressure governments and customers into taking the company’s preferred narrative seriously. The responsible position is not to dismiss Mythos, but to strip away the dramaturgy and ask what has actually been shown. On that standard, the answer is clear enough. Mythos is not nothing. Mythos is not fantasy. Mythos is also not yet public proof of a science fiction break with the past.
Why this matters now
The practical issue is cumulative pressure. A model does not need to become omnipotent overnight to change the balance between attackers and defenders. If frontier systems keep getting somewhat better at discovering vulnerabilities, chaining attack steps, and operating autonomously inside permissive environments, then the burden on software maintainers, infrastructure providers, and regulators keeps rising. The threat may be evolutionary rather than cinematic, but it is still a threat.
That is why the right conclusion is neither panic nor complacency. Anthropic’s critics are right to resist the theatrical version of the Mythos story. The company’s defenders are right that cyber capability is improving fast enough to demand serious preparation. What matters is not whether Mythos is the single model that changes everything. What matters is that Mythos makes the direction of travel harder to deny. Frontier AI is moving into the layer of systems that states, banks, cloud providers, and open source maintainers depend on.
Once that happens, the argument is no longer about chatbots. It is about who gets to probe, patch, defend, and eventually dominate the software foundations of the modern world.
You might also like to read on Telegraph.com
A thematic reading map of Telegraph.com AI coverage, grouped around the main fault lines shaping the field: agents, infrastructure, economics, labour, governance, and the geopolitical split.
Agents, autonomy, and the shift from chat to action
- The Age of the AI Operator: Why 2026 Marks the Shift From Chatbots to Autonomous Agents
- The Breakthrough Was Not the Model. It Was the Loop.
- The First Non-Human Economy Is Being Built by AI
- The Jarvis Layer: Why the Most Dangerous AI Is Not the Smartest One, but the One Closest to You
- OpenClaw, Moltbook, and the Legal Vacuum at the Heart of Agentic AI
Compute, power, and the hard physical limits of the boom
- AI’s next moat is no longer scale alone
- The Compute Detente: Why Big Tech Is Buying Everyone and Why It Will Not Last
- AI-Driven Data Centre Growth Is Colliding with Transformer Shortages and Raising the Risk of Prolonged Electricity Rationing in Britain
- Elon Musk Moves xAI Into SpaceX as Power Becomes the Binding Constraint on Artificial Intelligence
- The Cambrian Explosion of Robots Is Real and Most Will Die
Markets, business models, and who captures the gains
- AI Is Raising Productivity. Britain’s Economy Is Absorbing the Gains
- AI Is Raising Productivity. That Is Not the Same Thing as Raising Prosperity
- The Consulting Pyramid Is Breaking and McKinsey Just Admitted It
- The End of Rented Software: How Artificial Intelligence Breaks the Subscription Model
- Why Artificial Intelligence Is Breaking GDP and What Comes After
Labour, education, and the social shock
- AI Is Reordering the Labour Market Faster Than Education Can Adapt
- India’s AI Reckoning: When Intelligence Becomes Cheaper Than Labour
- Sadiq Khan Warns of Mass Unemployment. AI Poses a Deeper Threat to London
- AI Will Not Just Take Jobs. It Will Break Identities
- From Lecture Hall to Algorithm: How AI Is Rewriting Authority
Governance, safety, and the fight over control
- The AI Safety Race Has Collapsed as Companies Admit They Cannot Afford to Slow Down
- Why the Fight Over Defining AGI Is the Real AI Risk
- Why Treating AI as a Friend or Confidant Is a Dangerous Mistake and How It Can Lead, in the Worst Cases, to Suicide
- The Human Side Of Using A Very Large Machine
- The Quiet AI Revolution No One Noticed Until It Was Everywhere
China, the state, and the geopolitical split in AI
- China’s AI Governance Model vs America’s Frontier Race: Why the Real Battle Is Over Who Can Control Intelligence at Scale
- China Is Not Trying to Beat Western AI. It Is Trying to Replace the Interface
- Why AI Is Forcing Big Pharma to Turn to China
- China’s Open AI Models Could Puncture the Artificial Intelligence Bubble
- Beijing Writes the AI Rules While Washington Writes Press Releases