- Financial institutions are adopting AI at more than twice the rate of their supervisors, with only 2 in 10 regulators reporting “advanced AI adoption.”
- Only 24% of authorities collect data on industry AI adoption—and 43% have no plans to start within the next two years.
- Anthropic’s Mythos, which uncovered 271 vulnerabilities in Firefox 150, has turned the oversight gap into an urgent supervisory question.
When Anthropic’s Mythos model surfaced 271 vulnerabilities in Mozilla’s Firefox 150 earlier this month, it was the latest reminder that AI systems can uncover flaws faster than any human team. What regulators are only beginning to grapple with is what that means for the financial system they are supposed to supervise.
Financial institutions are adopting AI at more than twice the rate of their supervisors, according to a sweeping new survey published Tuesday by the Cambridge Centre for Alternative Finance. The research—prepared alongside the Bank for International Settlements, the IMF, and other multilateral institutions—found that only 20% of regulatory authorities reported “advanced AI adoption” in their own operations. Just 24% of the 130 central banks and financial authorities surveyed said they collect any data on how banks are using AI. Another 43% have no plans to start.
“This empirical blind spot may undermine the prevailing optimism [on AI]. Authorities cannot successfully harness or oversee AI if they are navigating its adoption and risks without hard data,” the report’s authors wrote. The survey drew responses from 350 financial institutions and fintechs, 140 AI vendors, and regulators across 151 countries.
Why Mythos Has Regulators Especially Worried
Anthropic released Mythos earlier this month and immediately decided not to make it publicly available. The reason, as the company explained, is that Mythos “turns computers into crime scenes.” The model can identify and exploit software vulnerabilities at a pace that makes traditional security research look quaint. When Mozilla got early access to Mythos Preview, its team found and patched 271 vulnerabilities in Firefox 150 within days. That is more flaws than most security teams uncover in a year. For context on what Mythos actually is and why Anthropic kept it under wraps, see Frontierbeat’s deep dive on the model.
The Cambridge report highlights Mythos as a leading example of systems that could soon exploit software vulnerabilities at scale, potentially outpacing existing governance mechanisms. “Regulators globally have engaged with banks over how prepared their legacy systems are for emerging frontier AI models,” Reuters reported. But engagement and oversight are different things, especially when most regulators lack the data to know what is actually running in production.
German banks were among the first to formally examine Mythos with their regulators, according to a separate Reuters story. The Federal Reserve and Treasury also summoned Wall Street CEOs to discuss the model earlier this month. But summoning executives for a conversation isn’t the same as having the technical capacity to audit how banks are deploying AI internally—which most regulators currently lack.
The problem isn’t that regulators are unaware of AI risks. The Bank for International Settlements has published multiple reports on AI systemic risk. The IMF has published warnings. The research is there. What’s missing is the operational capacity to act on it. Regulators are essentially trying to supervise a high-speed trading floor while still filling out paper forms.
For banks, that’s convenient. For everyone else, it’s a problem. The faster financial institutions adopt AI for credit decisions, fraud detection, and risk management, the harder it becomes for supervisors to catch model failures before they cascade. The Mythos disclosure didn’t create this gap—it just made it impossible to ignore.
The Cambridge Centre for Alternative Finance survey report is available via the Cambridge Judge Business School. Anthropic has not indicated when or whether Mythos might see a wider release.
