Rita Lin, a federal judge in California, issued a preliminary injunction on March 26, indefinitely blocking the Pentagon's "supply chain risk" designation of Anthropic. The 43-page ruling held that the action violated the First Amendment and due process, condemning it as classic unlawful First Amendment retaliation.
(Background: Anthropic aims for a Q4 IPO at a $380 billion valuation, racing OpenAI to go public first.)
(Additional context: Anthropic's lengthy AI Economic Index report: the frequency of automated trading workflows has doubled, and Claude is shifting from a tool to a life assistant.)
A 43-page ruling has dealt a significant blow to the Pentagon's retaliatory campaign. On the 26th, California federal judge Rita Lin issued a preliminary injunction indefinitely blocking the Defense Department's "supply chain risk" designation of AI company Anthropic, along with an order that had required federal agencies to sever business ties with the company.
Judge Lin's language in the ruling was stern, stating outright that the Pentagon's actions are unconstitutional: "Nothing in the relevant regulations supports such an Orwellian notion: that an American company can be branded a potential enemy and saboteur merely for voicing opinions that differ from the government's."
Lin also stayed the ruling for one week to give the government time to appeal.
The confrontation was triggered by two red lines Anthropic wrote into the contract terms for its Claude AI model: Claude may not be used for autonomous weapons systems, nor for domestic mass surveillance.
What the Pentagon wanted was unrestricted access to "all legitimate uses" of Claude, especially in wartime scenarios. Defense Department Chief Technology Officer Emil Michael told CNBC earlier this month: "We cannot allow a company with different policy preferences, preferences embedded in the model itself, to contaminate the supply chain, leaving our warfighters with ineffective weapons, body armor, and protective gear."
Anthropic, however, did not budge, insisting that its contractual safeguards are protected speech. After negotiations broke down, Defense Secretary Pete Hegseth took an unprecedented step in February of this year: designating Anthropic a supply chain risk and, in a joint order with Trump, demanding that all federal agencies sever ties with companies that do business with Anthropic.
The supply chain risk label had previously been applied only to companies suspected of ties to foreign adversaries such as China; wielding it against a domestic AI company was a historic first.
Anthropic argued that the label destroyed its reputation and jeopardized hundreds of millions of dollars in government contracts, and promptly filed suit in San Francisco federal court on March 9.
Lin's ruling went straight to the Pentagon's true motives: "These sweeping measures appear unrelated to the national security interests the government claims. The Defense Department's own records indicate that Anthropic was labeled a supply chain risk because of the hostile stance it took through the media." She continued:
"Punishing Anthropic for bringing the government's procurement stance into public view is classic unlawful First Amendment retaliation."
In other words, whatever justifications the government offered on the surface for its administrative decision, the actual motivation was to punish Anthropic for its prior statements and positions.
Anthropic welcomed the ruling, with a spokesperson responding: "We appreciate the court's swift action and are pleased that the court recognizes Anthropic is likely to prevail on the merits. This lawsuit exists to protect Anthropic, our customers, and our partners, but our focus remains on constructive collaboration with the government to ensure that all Americans can benefit from safe and reliable AI."
This is not Defense Secretary Hegseth's first courtroom setback. Earlier this month, a federal judge in Washington, D.C. ruled that the interview restrictions he imposed on several journalists violated the First Amendment; in February of this year, another judge found his suppression of a Democratic senator's speech likewise unconstitutional.
The significance of this lawsuit extends far beyond one company's win or loss; it is drawing a boundary for AI companies: the government can request access, but it cannot force a company to abandon its built-in ethical safeguards. At the same time, companies that state their positions publicly are protected by the Constitution and cannot be coerced into compliance through a "supply chain risk" label.