- The White House is preparing to grant US federal agencies access to Claude Mythos—the AI model Anthropic itself refused to release publicly over cybersecurity fears.
- Some agencies have already been quietly testing the system, sidestepping an earlier Trump administration restriction on Anthropic tools.
- The move mirrors OpenAI’s ‘trusted access’ model for GPT-5.4-Cyber, signaling that the US government is now the gatekeeper for the most dangerous AI systems on Earth.
The US government is about to get access to the AI model its own developer deemed too risky to ship. The White House is preparing a plan to allow federal agencies to use Claude Mythos—Anthropic’s most powerful system, which the company deliberately locked away from the public over concerns it could autonomously execute sophisticated cyberattacks, Bloomberg first reported Thursday.
Reuters confirmed the story, adding that the arrangement would involve a modified version of Mythos with additional guardrails tailored for government use. The details of which agencies would receive access, and for what purposes, remain murky. But cybersecurity and defense applications are the obvious candidates.
The timing is hard to ignore. Anthropic released Claude Opus 4.7 just hours earlier—the company’s new publicly available flagship, which it explicitly described as “less broadly capable” than Mythos. In other words, Anthropic shipped a nerfed version for everyone else while the White House quietly arranged VIP access to the real thing.
Why the Federal Government Wants the AI Model Nobody Else Can Have
Claude Mythos isn’t just another chatbot. The UK AI Security Institute found that the model could autonomously execute 32-step corporate network attacks—meaning it could chain together dozens of intrusion techniques without human guidance. Anthropic’s own internal testing flagged capabilities that made the company uncomfortable releasing it through normal channels.
That discomfort led to an unusual move: Anthropic locked Mythos behind restricted access and built a new model, Opus 4.7, specifically to test whether its cyber safeguards could hold up at scale. The logic was straightforward—deploy the weaker model first, observe how people try to break it, patch the vulnerabilities, then consider releasing Mythos-class capabilities later.
But the government apparently doesn’t want to wait. Politico reported that some federal agencies had already been quietly testing Mythos, skirting an earlier Trump administration directive that restricted Anthropic tools across the federal workforce. The new White House plan would formalize that access—though likely with a version of Mythos modified to reduce the risk of misuse.
This follows the template OpenAI established with GPT-5.4-Cyber, its restricted cybersecurity model available only to vetted defenders. The pattern is becoming clear: the most capable AI systems aren’t going to the public or even to most enterprises. They’re going to the government, under controlled conditions, for national security purposes.
For Anthropic, this is both validation and risk. The company spent months arguing that Mythos-class capabilities needed careful stewardship, and now that stewardship includes handing the keys to an administration that, until recently, had restricted its products across the federal workforce. Whether that's a testament to Mythos's value or a sign that the safety debate has already been overtaken by geopolitics depends on whom you ask.
As of Thursday evening, neither Anthropic nor the White House had released details on the specific agencies, use cases, or safeguards involved. Bloomberg reported the deal was still being finalized.
