Sam Altman Told His Employees OpenAI Has No Say in How the Pentagon Uses Its AI

OpenAI signed a Pentagon deal hours after Anthropic was blacklisted for refusing one. Sam Altman later admitted the company has no control over how the military uses its AI.

When Sam Altman faced his own employees after OpenAI signed a deal with the Pentagon, he said the quiet part out loud. “You do not get to make operational decisions,” he told staff — and then went further: “So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don’t get to weigh in on that.” That’s the CEO of a company that built its brand on responsible AI telling his own team they have no control over what happens once their product leaves the building.

The timing was almost theatrical. On February 27, 2026, Anthropic refused a Pentagon deal on ethics grounds, specifically over demands to remove AI safety guardrails. That same day, Defense Secretary Pete Hegseth declared Anthropic a “supply chain risk to national security,” a designation never previously used against a US company. OpenAI signed its deal hours later.

Altman later admitted to employees that the deal was rushed and made the company look “opportunistic and sloppy.” He wasn’t wrong. OpenAI had to amend the contract days after signing to add explicit prohibitions on domestic surveillance and fully autonomous weapons systems — things that arguably should have been in the original agreement.

The Company That Once Banned Military Use

This story didn’t start in February. In January 2024, OpenAI quietly deleted the clause in its usage policy that explicitly banned military applications — including weapons development and warfare — with no announcement and no explanation beyond the language being “confusing.” Now AI-enabled systems linked to OpenAI’s technology have reportedly been used in US military targeting decisions in Iran and in the operation to seize Venezuelan leader Nicolás Maduro.

Anthropic’s CEO Dario Amodei didn’t stay quiet about it. In a 1,600-word internal memo reported by TechCrunch, Amodei called OpenAI’s messaging “mendacious” and labeled the whole arrangement “safety theater.” He accused Altman of offering “dictator-style praise to Trump” and suggested the real reason Anthropic was punished was that his company hadn’t donated to Trump — while OpenAI’s Greg Brockman gave $25 million to a Trump-aligned PAC.

For context on what Anthropic was actually protecting, consider that the UK government chose Claude specifically because of its safety architecture. Anthropic has consistently held that there are uses of AI it won’t allow regardless of who’s asking. That stance cost it a government contract and earned it a national security blacklist.

The Users Who Voted With Their Wallets

Outside the boardrooms, a movement called QuitGPT spread fast. More than 2.5 million users pledged to cancel their ChatGPT subscriptions, with a boycott website tracking cancellations in real time and protesters appearing outside OpenAI’s San Francisco headquarters. The backlash wasn’t fringe — it cut across civil liberties advocates, AI researchers, and everyday paying subscribers.

By March 5, Anthropic was reportedly back in talks with the Pentagon. That means the entire standoff, from the blacklist to the leaked memo to the public feud, may end with both companies working with the Defense Department anyway. The difference is that Anthropic made them fight for it. OpenAI just said yes on a Friday night and spent the following week cleaning it up.
