What brings GPT and Claude together is their joint opposition to the Pentagon?
Author: Curry, Deep Tide TechFlow
A few days ago, a photo went viral online.
India hosted an AI summit, with Prime Minister Modi on stage flanked by a row of Silicon Valley executives. For the group photo, Modi raised the hand of the person next to him overhead, and the others linked hands too, creating a picture of unity.
But, only two people didn’t hold hands.
The CEOs of OpenAI and Anthropic, the companies behind ChatGPT and Claude, stood next to each other, each raising a fist.
No hand-holding, no eye contact—like two rivals forced to sit together by the teacher.
These two companies have been fighting fiercely in recent years. Claude was developed by a team that left OpenAI. They compete for users, enterprise clients, and funding. During the Super Bowl, Anthropic even spent money on ads mocking ChatGPT.
So, no hand-holding makes sense.
This time, however, they did shake hands, at least figuratively. Because of the Pentagon.
Here’s what happened.
Anthropic, the company behind Claude, signed a contract with the U.S. Department of Defense last year, worth up to $200 million. Claude is the first AI model deployed on the U.S. military’s classified networks, assisting with intelligence analysis and mission planning.
But Anthropic drew two red lines in the contract:
Claude cannot be used for mass surveillance of American citizens, nor for autonomous weapons without human involvement. (See: The 72-Hour Identity Crisis of Anthropic)
However, the Pentagon refused to accept these restrictions.
Their demand boiled down to two words: unrestricted use. Once you buy the tools, you should be free to use them however you wish. What right does a tech company have to tell the U.S. military what it can or cannot do?
Last Tuesday, Defense Secretary Hegseth delivered an ultimatum to Anthropic’s CEO: agree by 5:01 PM Friday or face the consequences.
Anthropic did not agree.
Their CEO issued a public statement, saying: “We understand the importance of AI to U.S. defense, but in some cases, AI can harm rather than defend democratic values. We cannot, in good conscience, accept this demand.”
The Pentagon’s negotiator, Deputy Secretary of Defense Emil Michael, then publicly called him a liar on social media, accusing him of having a god complex and of treating national security as a joke.
A Brief Handshake
Then, an unexpected event occurred.
Over 400 employees from OpenAI and Google signed a joint open letter titled “We Will Not Be Divided.”
The letter said the Pentagon was negotiating with AI companies one by one, trying to get the others to accept the very conditions Anthropic had refused, using fear to divide them.
OpenAI’s CEO also sent an internal memo to all staff, stating that OpenAI shares the same red lines as Anthropic:
No mass surveillance, no autonomous lethal weapons.
The two companies that wouldn’t even hold hands days earlier suddenly found themselves on the same side, because of the Pentagon.
But the unity would last only a few hours.
At 5:01 PM Friday, the Pentagon’s final deadline expired. Anthropic did not sign.
A U.S. tech company valued at $380 billion refused the U.S. Department of Defense, risking the voiding of a $200 million contract. In the past, that might simply have meant canceling the contract and finding another supplier. But Washington’s reaction this time was anything but business as usual.
About an hour later, Trump posted on Truth Social, calling Anthropic “left-wing lunatics,” accusing the company of trying to override the Constitution and of playing games with the lives of American soldiers.
He demanded all federal agencies immediately stop using Anthropic’s technology.
Soon after, Defense Secretary Hegseth announced that Anthropic was designated a “supply chain security risk.” This label is usually reserved for companies like Huawei. The message was clear: all contractors doing business with the U.S. military are now forbidden from using Anthropic’s products.
Anthropic said it would file a lawsuit.
That same evening, however, OpenAI, which had publicly taken the same stance, signed an agreement with the Pentagon.
Ideological Issues
What did OpenAI get?
With Claude pushed out, OpenAI became the AI provider for the U.S. military’s classified networks. But OpenAI put three conditions to the Pentagon: no mass surveillance, no autonomous weapons, and a human in the loop for high-risk decisions.
The Pentagon agreed.
You read that right. The conditions the Pentagon had refused Anthropic for weeks were accepted within days when a different company proposed them.
Of course, their plans aren’t exactly the same.
Anthropic demanded an additional layer of protection. It argues that current law lags behind AI capabilities: for example, it is perfectly legal to purchase and aggregate your location data, browsing history, and social media activity. Each step is lawful on its own, yet the combination effectively amounts to monitoring you.
Anthropic argued that simply writing “no surveillance” into a contract isn’t enough; the loopholes must be closed. OpenAI didn’t insist on this point. It accepted the Pentagon’s position that current law is sufficient.
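That loophole argument can be illustrated with a toy sketch (every identifier and record below is hypothetical, invented for this example): each dataset is legal to purchase on its own, but a trivial join on a shared user ID turns three innocuous purchases into a surveillance profile.

```python
# Hypothetical illustration: three datasets, each legal to buy separately,
# joined on a shared identifier into one profile of a single person.
location_pings = {"user_123": [("2025-01-05 08:14", "cafe"),
                               ("2025-01-05 18:02", "clinic")]}
browsing_logs = {"user_123": ["maps.example/clinic-directions",
                              "forum.example/support-group"]}
social_posts = {"user_123": ["Feeling anxious lately..."]}

def build_profile(user_id):
    """Merge independently sourced records into one timeline for one person."""
    return {
        "user": user_id,
        "locations": location_pings.get(user_id, []),
        "browsing": browsing_logs.get(user_id, []),
        "posts": social_posts.get(user_id, []),
    }

profile = build_profile("user_123")
# 2 location pings + 2 browsing entries + 1 post = 5 records on one person
print(len(profile["locations"]) + len(profile["browsing"]) + len(profile["posts"]))  # 5
```

No single purchase above looks like surveillance; the merge step is where a "no surveillance" clause without loophole language can fail.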
But if you think this is just a disagreement over a clause, you’re naive. From the start, this negotiation was about more than just terms.
White House AI czar David Sacks publicly accused Anthropic of building “woke AI” (AI that prioritizes ideology and political correctness), while senior Pentagon officials told the media that Dario Amodei’s objections were ideological: “We know who we’re dealing with.”
Elon Musk’s xAI, a direct competitor of Anthropic, has repeatedly attacked them on X this week, claiming the company “hates Western civilization.”
And last year, Anthropic’s CEO did not attend Trump’s inauguration; OpenAI’s CEO did.
A Lesson for the Future
Let’s summarize what happened.
Same principles, same red lines. Anthropic, which demanded one extra layer of safeguards, and which in Washington’s eyes had picked the wrong side and misread the moment, was branded a national security threat on par with Huawei.
OpenAI, with fewer demands, maintained good relations and secured the contract. Was this a victory of principles or a price paid for principles?
Resistance to Pentagon contracts isn’t new.
In 2018, over 4,000 Google employees signed a petition, and a dozen resigned in protest, opposing Google’s involvement in Project Maven, which used AI to analyze drone footage to help the military identify targets faster.
Google ultimately withdrew and did not renew the contract. The employees won.
Eight years later, the same controversy has resurfaced, but the rules have changed completely. An American company agreed to do military business, subject to just two restrictions. The U.S. government’s response: exclude it from the entire federal system.
And the “supply chain security risk” label is far more damaging than losing a $200 million contract.
Anthropic’s revenue this year is around $14 billion, so the $200 million contract is a tiny fraction of it. But the label means that no company doing business with the U.S. military can use Claude.
These companies don’t need to agree with the Pentagon’s stance—they only need to conduct a risk assessment: continue using Claude and risk losing government contracts; switch models and everything stays the same.
The choice is simple. That’s the real signal behind this incident.
It doesn’t matter whether Anthropic can withstand the pressure; what matters is whether the next company dares to. They will observe the outcome, consider the cost of sticking to principles, and make a very rational decision.
Look back at that photo from India: everyone’s hands raised and linked overhead, except for two men with raised fists.
Maybe that’s the norm.
AI companies can share principles, but their hands don’t necessarily have to be linked.