European Regulators Left Out of Anthropic’s Claude Mythos Preview

I watched a clip of British officials studying a report with the same quiet alarm you’d see when someone points out a loose wire under a table. You could feel the shift: curiosity curdling into concern. If you were sitting in a European cyber office that day, you probably felt like you’d been handed a plate of spaghetti and told it was a full briefing.

At a London briefing, officials leaned in and exchanged notes.

Why Anthropic handed Mythos previews to some, but not all

I’ll be blunt: you’re watching a diplomatic and security soap opera play out around access. Anthropic’s Claude Mythos Preview—an unreleased model—was shown to U.S. firms and selected governments, and the U.K.’s AI Security Institute has had enough access to test the model in some capacity, according to U.K. AI minister Kanishka Narayan.

That kind of selective viewing breeds two things: inside knowledge for a few and mounting suspicion for everyone else. You know how it feels when a trick is performed behind a velvet curtain? The room with the curtain is small, and the rest of the house wants a peek.

Why were European agencies left out of Anthropic Mythos access?

Politico reached out to eight continental agencies. Their mood ranged from mildly miffed to plainly frustrated. The Dutch agency, via spokesperson Job Holzhauer, told reporters the true impact of discovered vulnerabilities is “difficult to verify without technical details.” Germany had “entered into conversations” but had not been allowed to test the model. The message is simple: access was uneven.

On a conference call from The Hague, a spokesperson paused before answering.

What officials told Politico — and what that silence means

You should know this: when a national agency says it can’t verify an impact without details, that’s not hedging; it’s a signal. Without model access, you can only speculate about attack surfaces and exploitation paths. That uncertainty muddies risk calculations across networks and supply chains.

Anthropic’s Mythos preview reportedly displayed super-hacking capabilities. U.S. tech giants and governments got a demonstration. The U.K. acted on its findings. Continental agencies were left asking for the same test kit, told to wait or refused technical details outright.

What is Claude Mythos Preview and is it dangerous?

The short answer: it’s an unreleased AI model reportedly capable of creative exploits and unexpected behavior when probed. Officials in the U.K. say their testing turned up results worth acting on. If you run critical infrastructure or safeguard sensitive data, you should treat that claim as a red flag until someone shows the test logs or patches the issue.

In Berlin, conversations opened, but the hands-on test didn’t happen.

How legal lines and access rules are shaping the scramble

I saw a memo from a European office that read like a standoff: legal teams on one side, engineers on the other, and a vendor choosing who gets the demo. Laura Caroli, an AI researcher quoted by Politico, suggested the EU could be sidelined because Mythos remains unreleased; once a model is public, EU rules would apply.

That point matters. If Anthropic brought Mythos to market, obligations under EU law, notably the AI Act, could compel disclosures, safety assessments, or incident reporting. For now, the model sits in a gray zone where corporate discretion determines who tests and who waits.

Can EU regulators force access to unreleased AI models?

Not easily. You can pressure, you can negotiate, and you can legislate for future releases — but retroactive access to a private preview is a different fight. The EU’s leverage increases once a product crosses the market threshold; until then, the vendor sets the terms.

Let me be clear: this isn’t just about diplomatic bruising. It’s about operational safety. When only a handful of parties examine a powerful system, hidden failure modes can persist. You end up with security decisions made from filtered glimpses instead of full technical audits.

Anthropic, Apple, and Microsoft sit at the center of this, and governments have to choose whether to accept selective access or demand broader scrutiny. You should want regulators that can probe, test, and require fixes. Otherwise, you’re the agency staring at a locked safe with a tiny keyhole and no key.

So where does that leave you, a reader who cares about digital risk and public oversight? Watch who gets the demo, who gets the report, and who gets to act. And ask: when an AI behaves like a supercharged mystery, who do we trust to pry the lid off and show us what’s inside?