Frontierbeat

Pentagon Told Anthropic They Were “Very Close” on Key Issues—One Week After Trump Killed the Deal

A newly revealed court filing has exposed a striking contradiction at the heart of the Trump administration’s conflict with AI company Anthropic.

According to documents submitted to a San Francisco federal court on March 20, Pentagon Under Secretary Emil Michael emailed Anthropic CEO Dario Amodei on March 4, stating that the two sides were “very close” on resolving the issues now cited as evidence of a national security threat. This communication came just one week after President Trump and Defense Secretary Pete Hegseth publicly declared the relationship with Anthropic finished, as reported by TechCrunch.

The revelation raises serious questions about whether the Pentagon’s subsequent supply-chain risk designation—the first ever applied to an American company—was a legitimate national security decision or political retaliation. Sarah Heck, Anthropic’s Head of Policy and a former National Security Council official, submitted a sworn declaration stating: “At no time during Anthropic’s negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role.”

The company maintains that the government’s concerns were never raised during months of negotiations and only appeared for the first time in court filings.

What the “Supply Chain Risk” Designation Means for Defense AI Contracts

The dispute centers on Anthropic’s refusal to grant the Pentagon unrestricted use of its Claude AI models, particularly for autonomous weapons and mass surveillance of American citizens. In late February, following a meeting between Hegseth and Amodei that failed to produce an agreement, Trump ordered all federal agencies to cease using Anthropic technology and gave them six months to phase out existing deployments. CNN reported that this designation—typically reserved for companies seen as extensions of foreign adversaries—would prohibit any contractor, supplier, or partner doing business with the U.S. military from working with Anthropic.

On February 24, Amodei met with Hegseth and Michael at the Pentagon. By February 27, Trump had publicly denounced Anthropic on Truth Social. Yet on March 4—a full week later—Michael’s email indicated alignment was still possible. The very next day, Amodei published a statement noting the company had been having “productive conversations” with the Pentagon. Within 24 hours, Michael reversed course on X, posting that “there is no active Department of War negotiation with Anthropic.”

Anthropic’s court filings directly challenge the Pentagon’s technical claims. According to Thiyagu Ramasamy, Anthropic’s Head of Public Sector and former AWS executive, the company’s Claude models are deployed inside government-secured, “air-gapped” systems operated by third-party contractors.

The company insists there is no “remote kill switch, backdoor, or mechanism to push unauthorized updates,” and that Anthropic cannot see what government users type into the system. These concerns about potential sabotage, Anthropic argues, were never raised during negotiations.

First Amendment Battle and the Future of AI Safety in Defense

Reuters reported that in a March 17 filing, the Justice Department argued that Anthropic’s refusal to remove guardrails “is conduct, not protected speech,” and therefore not protected by the First Amendment. The filing states: “It was only when Anthropic refused to release the restrictions on the use of its products… that the President directed all federal agencies to terminate their business relationships with Anthropic.”

However, legal experts have suggested Anthropic may have a strong case. The company argues that the supply-chain risk designation constitutes government retaliation for its publicly stated AI safety views, violating constitutional protections. Anthropic’s lawsuit claims the “unprecedented and unlawful” designation violated free speech rights, due process rights, and federal procedural requirements. The company has warned that the designation could cause billions of dollars in losses and damage its reputation across commercial markets.

A hearing before Judge Rita Lin in San Francisco is scheduled for March 24, which could prove decisive. Microsoft and several retired military chiefs have filed supporting briefs urging the court to halt the Pentagon’s actions. For the AI industry, the outcome of this case could establish critical precedent regarding whether companies can maintain ethical guardrails on their technology—or whether government contracts come with an implicit requirement to abandon such positions.
