• Bessent and Powell summoned Wall Street bank CEOs for an emergency meeting on AI cybersecurity threats.
  • Anthropic’s Mythos model uncovered thousands of software vulnerabilities, some sitting undetected for 27 years.
  • The model is so capable that Anthropic restricted its release — a first for the company.

U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned the heads of America’s biggest banks to a closed-door meeting in Washington this week. The agenda item: what to do about Claude Mythos, an AI model Anthropic itself says is too dangerous to release widely.

Goldman Sachs CEO David Solomon, Bank of America’s Brian Moynihan, Citigroup’s Jane Fraser, Morgan Stanley’s Ted Pick, and Wells Fargo’s Charlie Scharf all attended the Tuesday meeting at Treasury headquarters. Bloomberg first reported the details. JPMorgan’s Jamie Dimon was invited but couldn’t make it — though he’d already warned shareholders that cybersecurity “remains one of our biggest risks” and that “AI will almost surely make this risk worse.”

The meeting was reportedly tacked onto a lobby group gathering the bank bosses were already attending in Washington. Regulators focused the invite list on so-called systemically important banks — the ones whose failure would threaten the entire financial system.

Why Mythos Has Wall Street Spooked

Mythos isn’t even publicly available yet. Anthropic has described it as its most capable cybersecurity model, one that exposed thousands of vulnerabilities in widely used software and popular applications during internal testing. The oldest of those flaws had sat undetected for up to 27 years.

That discovery spooked Anthropic enough to do something unprecedented: restrict the model’s release to a handful of organizations. Amazon, Apple, Microsoft, Cisco, Broadcom, and the Linux Foundation have access. This is the first time the company has limited access to one of its products.

Anthropic published a blog post at the beginning of the month warning that AI models had surpassed “all but the most skilled humans at finding and exploiting software vulnerabilities.” The company added that “the fallout — for economies, public safety, and national security — could be severe.”

The timing is uncomfortable for Anthropic. The company is already fighting the U.S. government’s designation of it as a supply chain risk — an unprecedented Pentagon action that limits its use inside federal agencies. A judge temporarily blocked the designation, but an appeals court has since let parts of it stand.

For banks, the equation is straightforward. Financial institutions are both heavy users of AI and prime targets for cyberattacks. A model that can find vulnerabilities faster than any human team is simultaneously the best defensive tool money can buy and the most terrifying weapon an attacker could deploy. The offensive capabilities of AI in cybersecurity have been doubling roughly every six months.

A 2026 survey of 252 financial institution leaders by CSI found that AI has become the banking sector’s greatest source of both optimism and anxiety. For the first time, more respondents flagged it as a top risk than as a top opportunity.

The Federal Reserve, Anthropic, and the invited banks all declined to comment. The Treasury didn’t respond to Bloomberg’s request either. The meeting happened behind closed doors, and the public details come from people familiar with the matter — which is Washington-speak for “someone in the room talked.”

What’s clear is that Mythos has crossed a threshold. When the U.S. government and the country’s most powerful financial institutions start emergency meetings about a product that hasn’t been released yet, the conversation has moved well beyond the usual AI hype cycle. The question isn’t whether AI can break things faster than humans can fix them. It’s what happens when it already has.