Caitlin Kalinowski, who led robotics and consumer hardware at OpenAI, announced her resignation on Saturday over the company’s recent agreement to deploy its AI models on the Pentagon’s classified cloud networks. In a post on X, Kalinowski said she stepped down because OpenAI announced the deal before finalizing its terms, specifically the safeguards governing how the military would use the technology.
Her departure adds pressure to a company already navigating significant internal and public scrutiny. Kalinowski did not dispute the idea that AI has a role in national security, but drew a clear line: “Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got,” she wrote on X.
Kalinowski acknowledged having deep respect for CEO Sam Altman and the broader team, but described the announcement as one made “without the guardrails defined” — framing it as a governance failure rather than a personal objection to defense work itself.
A Contract That Arrived Before the Rules Did
As Frontierbeat reported recently, OpenAI signed the Pentagon contract on February 27, 2026 — the same day Anthropic refused a similar agreement over demands to remove AI safety restrictions, and the same day Defense Secretary Pete Hegseth publicly designated Anthropic a “supply chain risk to national security.” OpenAI announced its deal hours later.
In the days that followed, the company had to amend the contract to add explicit prohibitions on domestic surveillance and fully autonomous weapons systems — provisions that critics argue should have been built into the original agreement, not added as an afterthought.
In a subsequent post on X, Kalinowski called it “a governance concern first and foremost,” adding that decisions of this magnitude are “too important for deals or announcements to be rushed.”
According to Reuters, OpenAI issued a statement on Saturday reaffirming that its internal “red lines” prohibit the use of its technology in domestic surveillance or for autonomous weapons systems.
“We recognize that people have strong views about these issues and we will continue to engage in discussion with employees, government, civil society and communities around the world,” the company said to Reuters.
Kalinowski joined OpenAI in 2024, having previously led augmented reality hardware development at Meta Platforms. Her resignation marks one of the more prominent internal departures tied directly to a policy decision rather than a strategic or business disagreement — and it comes from someone who built physical systems, not just software, making her perspective on where AI meets real-world consequences particularly pointed.
The Broader Fallout Around OpenAI’s Defense Strategy
The internal dissent reflects a wider debate that has been building since early 2024, when OpenAI quietly removed a clause from its usage policy that had explicitly prohibited military applications, including weapons development. According to Frontierbeat, that change was made without a formal announcement, with the company saying only that the previous language was “confusing.” Since then, AI-enabled systems connected to OpenAI’s technology have reportedly been involved in US military targeting operations.
Anthropic CEO Dario Amodei addressed the situation in a lengthy internal memo, reported by TechCrunch, in which he criticized the arrangement as “safety theater” and described the messaging from OpenAI as misleading. Separately, a user movement called QuitGPT attracted more than 2.5 million people pledging to cancel their ChatGPT subscriptions, according to Frontierbeat, with protests appearing outside OpenAI’s San Francisco headquarters. The backlash extended well beyond activist circles, drawing in AI researchers, civil liberties advocates, and ordinary paying subscribers.
What is clear is that for at least some people inside OpenAI, the line between enabling national security and rushing past necessary safeguards is one the company crossed too quickly this time.
