- Unauthorized users have accessed Anthropic’s Mythos model through a private Discord channel since the day it was announced.
- The leak undermines Anthropic’s claim that Mythos is “too dangerous to release publicly.”
- Mozilla used Mythos to find 271 Firefox bugs — proof that the model’s capabilities are real, even if the restrictions aren’t holding.
A small group of unauthorized users has been accessing Anthropic’s Mythos AI model through a private Discord channel since the day the company announced it, according to Bloomberg. The leak raises immediate questions about how a model Anthropic described as “too dangerous to release publicly” ended up in the hands of people who were never supposed to have it.
Anthropic unveiled Mythos earlier this month as a cybersecurity-focused AI with capabilities the company argued warranted restricted deployment. (The launch came amid a hot streak for Anthropic, whose valuation recently hit $380 billion following Amazon’s expanded investment.) The model can find software vulnerabilities at a scale that, according to Anthropic, could enable dangerous cyberattacks if released openly. The company limited access to select partners and government agencies under a program called Project Glasswing.
That restriction didn’t hold. According to Bloomberg’s source, unauthorized users obtained access on the same day Anthropic made the announcement — a gap between the company’s stated security posture and the actual control it had over its own technology.
The Mythos Marketing Problem
The timing is awkward for Anthropic, which has spent the past two weeks positioning Mythos as a proof point for responsible AI development. The company met with White House officials last Friday to discuss access for US agencies. OpenAI CEO Sam Altman, meanwhile, called the Mythos rollout “fear-based marketing” during a podcast appearance on Core Memory.
“It is clearly incredible marketing to say, ‘We have built a bomb, we are about to drop it on your head. We will sell you a bomb shelter for $100 million,’” Altman said. “There are people in the world who, for a long time, have wanted to keep AI in the hands of a smaller group of people. You can justify that in a lot of different ways.” (OpenAI, for its part, has been losing key executives while Altman pivots to attacking competitors.)
The NSA is also reportedly using Mythos despite an ongoing feud between the Pentagon and Anthropic over the model’s deployment, according to TechCrunch. The intelligence community’s interest underscores the tension between national security agencies that want the tool and a company trying to control who gets it.
Real Results, Real Leaks
Meanwhile, the model is already producing tangible results for authorized users. Mozilla used Mythos Preview to find and fix 271 bugs in Firefox, according to Wired. Firefox CTO Bobby Holley said the AI tools “have changed things dramatically” because they can now cover “the full space of vulnerability-inducing bugs” — a capability that previously required teams of human researchers spending millions of dollars.
The open-source community has also taken notice. A project called OpenMythos appeared on GitHub last week, claiming to be a PyTorch reconstruction of Mythos: a 770 million parameter model that allegedly matches a 1.3 billion parameter transformer, according to MarkTechPost. Anthropic has never published a technical paper on Mythos, making any reconstruction speculative — but the project’s existence shows the community isn’t waiting for permission.
For Anthropic, the unauthorized access is a credibility problem. The company’s argument for restricting Mythos was straightforward: the model is too capable to let everyone have it. That argument weakens when “everyone” includes a Discord server the company didn’t know about.

