Pentagon's Final Ultimatum: Anthropic's 72-Hour Life-or-Death Crisis
Author: Ada, Deep Tide TechFlow
Tuesday, February 24. Washington, Pentagon.
Anthropic CEO Dario Amodei sat across from Defense Secretary Pete Hegseth. According to multiple media outlets including NPR and CNN, the meeting was “polite” in tone, but the content was anything but courteous.
Hegseth delivered a final ultimatum: by 5:01 PM Friday, lift restrictions on Claude’s military use, allowing the Pentagon to deploy it for “all lawful purposes,” including autonomous weapon targeting and domestic mass surveillance.
Otherwise: cancel the $200 million contract, invoke the Defense Production Act for compulsory requisition, and list Anthropic as a “supply chain risk,” effectively placing it on the same blacklist as hostile entities from Russia and China.
On the same day, Anthropic quietly released version 3.0 of its Responsible Scaling Policy (RSP 3.0), removing the company’s core commitment: if safety measures cannot be guaranteed, do not train more powerful models.
Also on that day, Elon Musk posted on X: “Anthropic is mass stealing training data, that’s a fact.” Community notes on X added reports that Anthropic paid $1.5 billion in settlement for training Claude using pirated books.
Within 72 hours, this AI company, claiming to have a “soul,” played three roles simultaneously: safety martyr, intellectual property thief, and Pentagon traitor.
Which one is true?
Maybe all of them.
Pentagon’s “Either Obey or Get Out”
The story’s first layer is simple.
Anthropic is the first AI company granted classified access by the U.S. Department of Defense. The contract was awarded last summer, with a cap of $200 million. Subsequently, OpenAI, Google, and xAI also received contracts of similar scale.
According to Al Jazeera, Claude was used in a U.S. military operation this January, reportedly one aimed at capturing Venezuelan President Maduro.
But Anthropic drew two red lines: no support for fully autonomous weapon targeting, and no support for large-scale surveillance of U.S. citizens. Anthropic argues that AI is not yet reliable enough to control weapons, and that no law or regulation currently governs the use of AI for mass surveillance.
The Pentagon isn’t buying it.
White House AI advisor David Sacks publicly accused Anthropic on X last October of “weaponizing fear to capture regulation.”
Competitors have already bowed. OpenAI, Google, and xAI agreed to let the military use their AI for “all lawful scenarios.” Musk’s Grok was approved to access classified systems just this week.
Anthropic is the last one standing.
As of press time, Anthropic’s latest statement says it does not intend to back down. But the Friday 5:01 PM deadline is looming.
An anonymous former DOJ and DOD liaison told CNN: “How can you simultaneously declare a company a ‘supply chain risk’ and force it to work for your military?”
Good question, but it is outside the Pentagon’s considerations. What the Pentagon cares about is whether Anthropic will compromise, be forced to comply, or be cut loose by Washington.
“Distillation Attack”: An Accusation That Backfired
On February 23, Anthropic published a strongly worded blog post accusing three Chinese AI companies of conducting an “industrial-scale distillation attack” on Claude.
The accused are DeepSeek, Moonshot AI, and MiniMax.
Anthropic claims they used over 24,000 fake accounts to initiate more than 16 million interactions with Claude, targeting its core reasoning, tool invocation, and programming capabilities.
Anthropic characterizes this as a national security threat, asserting that models distilled from this data are unlikely to retain safety guardrails and could be exploited by authoritarian governments for cyberattacks, disinformation, and mass surveillance.
The narrative is perfect, and the timing is impeccable.
The accusation landed just after the Trump administration eased export controls on Chinese chips, exactly when Anthropic was looking for ammunition for its lobbying in favor of chip export restrictions.
But Musk shot back: “Anthropic is mass stealing training data and paid billions in settlement. That’s a fact.”
AI infrastructure company IO.Net co-founder Tory Green said: “You train your model on the entire web’s data, then others learn from your public API—that’s called ‘distillation attack’?”
Anthropic calls distillation an “attack,” but this is common practice in the AI industry. OpenAI used it to compress GPT-4, Google to optimize Gemini, and even Anthropic itself does it. The only difference is, this time, they are the ones being distilled.
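For context, the “distillation” everyone is arguing about is the standard knowledge-distillation technique: a smaller student model is trained to imitate a larger teacher model’s outputs. Below is a minimal sketch of the classic logit-matching form of the objective; it is purely illustrative, and the API-based variant alleged here would train on sampled text rather than logits, since API users never see a model’s internal probabilities.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature gives softer distributions."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened output distributions.

    This is the classic distillation objective: the student is pushed to
    match the teacher's full probability distribution over outputs, not
    just its top answer. The T^2 factor keeps gradient magnitudes
    comparable across temperatures.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(np.mean(kl) * temperature**2)

# A student that already matches the teacher incurs (near-)zero loss...
teacher = np.array([[2.0, 1.0, 0.1]])
assert abs(distillation_loss(teacher, teacher)) < 1e-9

# ...while a mismatched student incurs positive loss, driving it toward
# the teacher's behavior during training.
student = np.array([[0.1, 1.0, 2.0]])
assert distillation_loss(teacher, student) > 0
```

The point of the dispute is not the math, which is textbook, but who is allowed to run it against whose outputs.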
According to Erik Cambria, an AI professor at Nanyang Technological University in Singapore, speaking to CNBC: “The line between legitimate use and malicious exploitation is often blurry.”
Ironically, Anthropic itself paid $1.5 billion to settle claims that it trained Claude on pirated books. It trains models on the entire web, then accuses others of learning from its public API. This isn’t a double standard; it’s a triple standard.
Anthropic wanted to play the victim but ended up as the defendant.
The Dismantling of Safety Commitments: RSP 3.0
On the same day as its confrontation with the Pentagon and its public spat with Silicon Valley, Anthropic released version 3.0 of its Responsible Scaling Policy.
Anthropic Chief Scientist Jared Kaplan said in an interview: “We believe stopping AI training doesn’t help anyone. In the context of rapid AI development, making unilateral commitments… while competitors push full speed ahead, makes no sense.”
In other words, if others are acting unethically, we won’t pretend to be virtuous anymore.
Core to RSP 1.0 and 2.0 was a strict promise: if a model’s capabilities exceed safety measures, training would be paused. This promise earned Anthropic a unique reputation in the AI safety community.
But 3.0 removes that.
Instead, it introduces a more “flexible” framework, dividing safety measures that Anthropic can implement on its own from safety recommendations requiring industry-wide collaboration. Risk reports will be issued every 3-6 months, reviewed by external experts.
Sound responsible?
Chris Painter, an independent reviewer from nonprofit METR, said after reviewing early drafts: “This indicates that Anthropic believes it needs to enter a ‘triage mode’ because its methods for assessing and mitigating risks can’t keep pace with capability growth. It more likely reflects society’s unpreparedness for AI’s potentially catastrophic risks.”
According to TIME, Anthropic spent nearly a year internally debating this rewrite, with CEO Amodei and the board unanimously approving. Officially, the original policy aimed to promote industry consensus, but the industry never caught up. The Trump administration’s laissez-faire attitude toward AI development, including attempts to repeal state regulations, has left federal AI legislation in limbo. In 2023, establishing a global governance framework still seemed possible; three years later, that door has closed.
A long-time AI governance researcher, speaking anonymously, said more bluntly: “RSP is Anthropic’s most valuable brand asset. Removing the pause on training is like an organic food company secretly tearing off the ‘organic’ label from its packaging and then claiming their testing is now more transparent.”
Valuation of $380 Billion and Identity Crisis
In early February, Anthropic raised $300 million at a $380 billion valuation, with Amazon as a cornerstone investor. Its annualized revenue has reached $14 billion, a figure that has grown more than tenfold a year over the past three years.
Meanwhile, the Pentagon threatened to blacklist it, Musk publicly accused it of data theft, and its core safety commitment was deleted. AI safety head Mrinank Sharma resigned, then posted on X: “The world is in danger.”
Contradiction?
Perhaps contradiction is in Anthropic’s DNA.
Founded by former OpenAI executives who worried that OpenAI was moving too fast at the expense of safety, Anthropic set out to build even more powerful models even faster, all while telling the world how dangerous those models are.
Their business model can be summarized as: we fear AI more than anyone else, so you should fund us to build AI.
This narrative worked perfectly in 2023-2024. AI safety was a hot topic in Washington, and Anthropic was the most favored lobbyist.
By 2026, the tide turned.
“Woke AI” became a slur, state-level AI regulation bills were blocked by the White House, and although California’s SB 53, which Anthropic supported, was signed into law, the federal landscape remained barren.
Anthropic’s safety brand is slipping from a “differentiation advantage” to a “political liability.”
It is walking a tightrope: it needs enough “safety” to maintain its brand, but enough “flexibility” to avoid being abandoned by markets and governments. The problem is that the margin on both sides is shrinking.
How Much Is the Safety Narrative Worth?
Stacking these three issues together makes the picture clear.
Accusing Chinese companies of distilling Claude reinforces the narrative for chip export controls. Removing the safety-pause commitment keeps it in the arms race. Refusing the Pentagon’s autonomous-weapons demands preserves its last moral veneer.
Every step makes sense logically, but they contradict each other.
You can’t claim that Chinese companies’ “distillation” endangers national security while simultaneously deleting the safety commitments meant to keep your own models under control. If the models are truly that dangerous, you should be more cautious, not more aggressive.
Unless you’re Anthropic.
In the AI industry, identity isn’t defined by your statements but by your balance sheet. Anthropic’s “safety” narrative is essentially a brand premium.
In the early AI arms race, this premium was worth money. Investors paid higher valuations for “responsible AI,” governments greenlit “trustworthy AI,” and customers paid for “safer AI.”
But by 2026, this premium is evaporating.
Anthropic no longer faces a question of whether to compromise, only of whom to compromise with first. Compromising with the Pentagon damages its brand. Compromising with competitors nullifies its safety commitments. Compromising with investors means losing on both ends.
By 5:01 PM Friday, Anthropic will deliver its answer.
But whatever the answer, one thing is certain: the Anthropic that once thrived on “we’re different from OpenAI” is becoming just like everyone else.
The end of an identity crisis often means the disappearance of identity itself.