TL;DR

  • Anthropic’s CMS leaked a draft post revealing “Claude Mythos” — a new AI tier above Opus, codenamed Capybara.
  • The model is described as “far ahead of any other AI in cyber capabilities” — and Anthropic is worried about what that means.
  • Cybersecurity stocks fell hard on the news: CrowdStrike -7%, Palo Alto -6%, Zscaler -4.5%.

Anthropic didn’t plan to tell anyone about Claude Mythos. A CMS misconfiguration did it for them.

A security researcher stumbled on a publicly accessible data store tied to Anthropic’s blog last week, containing close to 3,000 unpublished assets — including a draft blog post announcing the company’s most powerful AI model to date. Fortune broke the story Thursday evening.

Anthropic confirmed the leak, blaming “human error” in its content management system. The CMS apparently sets assets to public by default — meaning anyone who knew where to look could read internal drafts meant to stay private.

What they found was explosive.

What the Claude Mythos Leak Actually Reveals

The document introduces two new names: "Claude Mythos" is the model itself; "Capybara" is the new product tier it belongs to. Capybara slots in above Opus, Anthropic's current most powerful offering, creating a fourth tier in a lineup that previously topped out at Haiku, Sonnet, and Opus.

“Compared to our previous best model, Claude Opus 4.6, Capybara gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity,” the draft blog said.

Anthropic confirmed the model in a statement to Fortune. “We consider this model a step change and the most capable we’ve built to date,” a spokesperson said. It’s currently in early access with select customers — not publicly available, and described as expensive to run.

The Cybersecurity Warning Hidden in the Leak

Here’s where it gets uncomfortable. The draft post doesn’t just hype the model — it warns about it.

Anthropic described Claude Mythos as “currently far ahead of any other AI model in cyber capabilities” and said it “presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.”

In plain terms: the company is worried its own model is too good at hacking.

The rollout strategy reflects that anxiety. Instead of a broad release, Anthropic is prioritizing cyber defenders — giving them a head start against what it sees as an incoming wave of AI-powered exploits. It’s not a charitable move; it’s a controlled detonation.

This isn’t entirely new territory. Anthropic has already dealt with real-world misuse: a Chinese state-sponsored group was caught running a coordinated campaign using Claude Code to infiltrate around 30 organizations — tech firms, financial institutions, government agencies — before Anthropic detected and shut it down.

The company has also been building safety infrastructure to guard against dual-use risks, including hiring weapons policy specialists to prevent AI misuse.

Cybersecurity Stocks Plunge on Claude Mythos News

Cybersecurity stocks fell sharply on Friday. CrowdStrike dropped 7%, Palo Alto Networks fell 6%, Zscaler lost 4.5%, and Okta, SentinelOne, and Fortinet each shed around 3%.

The fear is straightforward: if AI can autonomously find and fix vulnerabilities, demand for traditional security vendors could erode. Why pay for a team of analysts when a model does it faster and cheaper?

It's a similar dynamic to February, when Anthropic launched Claude Code Security and cybersecurity stocks briefly tanked before recovering. Whether this dip sticks depends on how real the Mythos threat turns out to be.

What Else Was in the Data Cache

Beyond the model announcement, the exposed store also contained details of an invite-only CEO retreat — an 18th-century English manor, two days, Dario Amodei, and Europe’s most influential business leaders set to experience “unreleased Claude capabilities.”

Given what just leaked, that summit is going to be a very interesting conversation.

Anthropic has had a turbulent few weeks. The company recently won a court order blocking the Trump administration’s attempt to cut off its Pentagon contracts after refusing to remove its safety guardrails.

Claude Mythos will arrive eventually — Anthropic says it’s just being deliberate about how. But one thing’s already clear: someone really should have changed that default CMS setting.
