Federal judge temporarily blocks Pentagon from blacklisting Anthropic, ruling the move unconstitutional retaliation, as the $380 billion AI giant’s IPO plans proceed.
The U.S. AI industry recently celebrated a landmark judicial victory. Federal judge Rita Lin in California issued a preliminary injunction last Thursday (3/26), preventing the Trump administration from placing AI giant Anthropic on a supply chain risk blacklist. The ruling directly challenges directives from the Pentagon and Defense Secretary Pete Hegseth, with the judge pointedly noting the irony of the government labeling a domestic American company a “potential adversary” and “disruptor.”
Judge Lin emphasized that the current legal framework does not support this “Orwellian” administrative logic. Treating a company as a national security threat simply because it voices a position at odds with the government’s, she held, clearly exceeds the executive branch’s authority. The court ordered the federal government to immediately cease enforcing the relevant restrictions and to submit a compliance report by April 6 detailing how the ban will be lifted.
The core of this legal battle is whether the government used national security as a pretext for retaliating against dissenting speech. According to documents revealed in court, the Defense Department’s actions against Anthropic were based largely on the company’s public statements in the media, without any rigorous technical security assessment. The judge found that punishing Anthropic for publicly airing its position in the contract dispute constitutes retaliation in violation of the First Amendment. During a hearing held on Tuesday in San Francisco, Judge Lin questioned the government’s threshold for designating a company a “threat” as far too low.
She argued that if the Pentagon had concerns about technology restrictions, it could simply stop using the Claude model rather than apply a stigmatizing and destructive “supply chain risk” label. The ruling temporarily restores Anthropic’s status as a federal contractor and establishes a legal shield for AI companies facing similar government pressure in the future.
The conflict traces back to a $200 million contract signed between Anthropic and the Pentagon in July 2025, under which Claude became the first cutting-edge AI model permitted to operate on classified networks.
The partnership broke down in February 2026, however, when the Defense Department attempted to renegotiate the contract, demanding that Anthropic remove all usage restrictions and allow the military to use Claude for any lawful purpose. Anthropic, a longtime advocate of AI safety, held firm, refusing to let its technology be applied to lethal autonomous weapon systems or large-scale domestic surveillance of American citizens. Company executives believe current AI models lack the safety and accuracy required for such sensitive applications, and that recklessly deploying them in combat or surveillance could lead to catastrophic consequences.
The situation then escalated rapidly, with the administration’s response taking on a strongly personal tone. On February 24, Defense Secretary Hegseth issued an ultimatum to Anthropic during a meeting, threatening immediate sanctions if it did not remove the restrictions. After Anthropic refused to compromise, President Trump publicly denounced the company on Truth Social on February 27 as a “radical left, woke corporation,” accused it of attempting to intimidate the Defense Department, and ordered all federal agencies to stop using Anthropic’s products.
Immediately afterward, Hegseth described Anthropic’s stance as “a masterpiece of arrogance and betrayal” and formally designated the company a supply chain risk. This extreme measure, typically reserved for foreign intelligence agencies or terrorist organizations, was applied to a domestic tech company for the first time. Experts see this weaponization of legal tools as reflecting the administration’s impatience with challenges grounded in technology ethics.
Despite the political and legal storm, Anthropic has continued to perform strongly in the capital markets. Data through 2025 show Anthropic holding a 32% share of the enterprise AI market, ahead of OpenAI’s 25%, positioning it as the industry leader. In a funding round completed in February this year, led by the sovereign wealth fund MGX, the company’s valuation soared to $380 billion.
Meanwhile, technology giants including Alphabet’s Google, Amazon, Microsoft, and Nvidia have all established deep partnerships with Anthropic, with cumulative investment and infrastructure deals reaching tens of billions of dollars. The backing of these Silicon Valley giants gives Anthropic the economic foundation to withstand political pressure.
The court ruling also paves the way for Anthropic to enter the public markets. Sources say Anthropic is accelerating its IPO plans, with a listing possible as early as October 2026 that is expected to raise more than $60 billion. Top investment banks including Goldman Sachs, JPMorgan, and Morgan Stanley are named as potential underwriters.
Legal experts point out that had the blacklist ban remained in effect, it would have significantly eroded Anthropic’s market share and severely undermined investor confidence. The judge’s finding that the ban is likely unconstitutional protects the company’s business interests and sends a message to the market: a company’s commitment to AI safety ethics should be legally protected, not serve as a pretext for government suppression.
Beyond the legal battle, Anthropic’s infrastructure expansion has not slowed. Google is planning to back a $5 billion data center project in Texas that will be fully leased to Anthropic. The facility, spanning 2,800 acres and operated by Nexus Data Centers, is expected to deliver approximately 500 megawatts (MW) of power capacity by the end of 2026, enough to meet the electricity needs of 500,000 households, with potential future expansion to 7.7 gigawatts (GW). Its prime location near major natural gas pipelines allows it to generate its own power with on-site gas turbines, ensuring high reliability for AI computing. The project demonstrates the tech industry’s firm commitment to long-term investment in AI infrastructure.
Additionally, according to foreign media reports, the U.S. military has developed a path dependency on the Claude model. Even after the White House issued its strict ban, U.S. Central Command (CENTCOM) continued using Anthropic’s AI model for operational support in recent airstrikes against Iran, exposing a disconnect between government policy and frontline needs.
The executive branch blacklisted the company over its political stance, while operational units kept relying on it for its technical advantages. The data center buildout and the model’s continued use in operations both underscore Anthropic’s importance. With the court restoring its status as a federal supplier, the AI giant will continue to navigate the balance between technology ethics and national interest while preparing for potential government appeals.