- Judge Rita Lin ruled the Pentagon’s “supply chain risk” label against Anthropic was likely unlawful and arbitrary, blocking the federal ban.
- Anthropic’s lawsuit alleges First Amendment violations, due process failures, and unauthorized executive action across multiple federal agencies.
- The ruling may set a precedent for how the U.S. government can restrict AI vendors on national security grounds without formal legal procedures.
Anthropic, the artificial intelligence company behind the Claude chatbot, has secured a significant legal victory against the Trump administration. Federal Judge Rita Lin granted a temporary injunction blocking the government’s punitive measures against the AI firm, which had been designated as a “supply chain risk” by the Department of Defense.
According to CNN, the dispute originated when Anthropic refused to allow the Pentagon unrestricted use of its Claude AI model, specifically drawing red lines against deploying the technology for fully autonomous lethal weapons and domestic mass surveillance.
Following a failed meeting between CEO Dario Amodei and Defense Secretary Pete Hegseth on February 24, 2026, the Trump administration escalated the conflict by ordering all federal agencies and military contractors to halt business with Anthropic, as reported by CNN.
Anthropic’s Legal Battle Against the Federal Government Ban
In her ruling, Judge Rita Lin found that the government had overstepped its authority in its attempts to punish and coerce Anthropic. The judge stated that the Pentagon’s designation of Anthropic as a “supply chain risk” is “likely both contrary to law and arbitrary and capricious.” During the hearing, Lin made pointed observations about the government’s actions, saying “it looks like an attempt to cripple Anthropic” and noting that the Department of Defense provided “no legitimate basis to infer from Anthropic’s forthright insistence on usage restrictions that it might become a saboteur.”
The lawsuit filed by Anthropic on March 9, 2026, in the U.S. District Court for the Northern District of California, names multiple federal agencies as defendants, including the Department of Treasury, Department of Commerce, Department of State, and the General Services Administration, among others.
According to CoinDesk, Anthropic argues it was effectively shut out of government procurement because officials informally imposed nationwide contracting restrictions on national security grounds without formal determination, interagency review, or documented evidence.
The company’s legal claims include allegations of First Amendment violations, arguing that the government is retaliating against Anthropic for exercising its protected speech rights by publicly defending its ethical red lines. Anthropic also contends that President Trump lacks the authority to direct federal agencies to stop using the company’s technology without proper legal procedures, and that the company was denied adequate due process.
Broader Implications for AI Regulation and Federal Procurement
The temporary injunction, which Judge Lin stayed for one week to allow the government to respond, has significant implications beyond this specific case. The ruling could set a precedent for how far federal agencies can go in restricting AI vendors on national security grounds without following established procurement rules.
Dozens of scientists and researchers from OpenAI and Google DeepMind filed an amicus brief supporting Anthropic, arguing that the supply chain risk designation could harm U.S. competitiveness in the AI industry and hamper public discussions about the risks and benefits of artificial intelligence.
Despite the conflict with the government, Anthropic’s Claude AI app surpassed OpenAI’s ChatGPT in Apple’s App Store rankings following the Pentagon’s initial announcement, and the company reported more than one million daily sign-ups as of early March 2026.
The case arrives at a critical juncture as the federal government undertakes its largest-ever push to adopt AI. While Anthropic has been sidelined, OpenAI struck a deal with the Pentagon just hours after the Trump administration’s February 27 order against Anthropic.
The outcome of this legal battle will likely influence how AI companies negotiate ethical boundaries with government clients and could shape the framework for AI deployment in sensitive national security contexts for years to come.