• OpenAI published a five-pillar cyber defense action plan on Tuesday.
  • The plan calls for widening access to its most advanced AI models across government agencies.
  • The irony: the U.S. government’s own cyber agency still can’t get access to those same models.

OpenAI published a five-pillar action plan Tuesday for deploying advanced AI in cyber defense roles across federal and state governments, calling it a blueprint for the “Intelligence Age.” The plan covers democratizing AI-powered defense tools, coordinating between government and industry, securing frontier AI capabilities, maintaining deployment visibility, and helping end users defend themselves. A 14-page PDF with the full proposals is available on OpenAI’s website.

The timing is notable. Just two days before the plan dropped, CNN reported that OpenAI is expanding access to its most advanced models to help businesses and governments shore up their cyber defenses—a direct contrast to Anthropic, which has taken a far more restrictive stance on its cyber-capable model Mythos. OpenAI and Anthropic also quietly briefed House Homeland Security Committee staff last week on their respective approaches to frontier AI and national security, Axios reported.

The Five-Pillar Plan

The plan, titled “Cybersecurity in the Intelligence Age,” is organized around broadening access to AI defensive tools to “trusted actors across society”—a phrasing that implies not everyone qualifies. Pillar one focuses on democratizing cyber defense through infrastructure and access programs. Pillar two calls for closer coordination between government agencies and private companies. Pillar three is dedicated to securing frontier AI capabilities themselves from misuse. Pillars four and five address deployment controls and end-user protection, respectively.

The document reads less like a technical spec and more like a policy argument—it acknowledges that AI helps defenders but also lowers the barrier for malicious actors. “The same capabilities that help defenders identify vulnerabilities, automate remediation, and respond faster are also being used by malicious actors to scale attacks, lower barriers to entry, and increase sophistication,” the plan states.

The approach sets OpenAI apart from competitors. Anthropic has explicitly declined to release its Mythos model broadly, with CEO Dario Amodei calling it “too dangerous to release” in an April 24 Time Magazine interview. OpenAI has taken the opposite tack—wider distribution, with access gated through what it calls “trusted” channels rather than hard restrictions. Mythos can reportedly run attacks autonomously at machine speed, a capability no publicly released model had offered before.

CISA Hasn’t Been Invited

There is a sharp irony embedded in Tuesday’s announcement. Staff at the Cybersecurity and Infrastructure Security Agency—the lead federal office for defending critical infrastructure—said they don’t have access to the latest models from either OpenAI or Anthropic, Forbes reported on April 27. CISA’s own analysts told Forbes the lack of access is impeding their ability to assess and protect U.S. infrastructure from AI-enabled threats.

OpenAI’s plan proposes deepening government access. But the proposal raises the same concern that has followed AI companies into every congressional briefing this month: who decides what “trusted” means, and who is left out? The agency tasked with securing the nation’s infrastructure is currently on the outside of that circle.

