OpenAI's Pentagon deal was announced just hours after the federal ban on Anthropic.
When and how was it announced?
OpenAI CEO Sam Altman announced on Friday night that the company had signed a deal with the US Department of Defense. The announcement came on the same day Trump banned Anthropic from federal agencies, declaring the company a "national security supply chain risk."
Scope of the agreement:
OpenAI's GPT models can be used on classified military networks.
The contract value is estimated to be around $200 million.
OpenAI offers its models through its own cloud API.
Security and Ethical "Guardrails"
OpenAI emphasizes that it maintains the same two major restrictions over which Anthropic's negotiations with the Pentagon broke down:
Domestic mass surveillance is prohibited.
Human oversight is mandatory for fully autonomous weapons systems; the models cannot be used freely for "every legitimate purpose."
OpenAI claims to offer "stronger guardrails": a combination of legal, technical, process, and human safeguards. Altman also added that the "supply chain risk" label applied to Anthropic was unfair.
Why so fast?
Apparently, the Pentagon's negotiations with Anthropic had stalled for months. Immediately after Anthropic was excluded by Trump's order, OpenAI stepped in and closed the deal, getting similar restrictions accepted. Some commentators see this as "competitive advantage" and "pragmatism": while the Trump administration criticized Anthropic as "left-wing," OpenAI was seen as more flexible and collaborative.
Reactions and comments
Some say, "The Pentagon rejected Anthropic's terms but accepted OpenAI's, an interesting contradiction."
There are also criticisms: "not a real guardrail, just PR," or "human oversight is weak, since it is tied to the Pentagon's own policy."
OpenAI's own blog post shares details and calls for similar agreements to be offered to all AI companies.
In short: Anthropic resisted on principle and lost; OpenAI, while maintaining the same (or similar) principles, won and took a big step into the military AI market. The episode has sharpened both the government's stance toward AI companies that insist on "safety" conditions and the competition among AI companies.
#TrumpordersfederalbanonAnthropicAI
US President Donald Trump has banned Anthropic's artificial intelligence technologies from all federal government agencies. In a statement on his Truth Social account, Trump described Anthropic as a "radical leftist, shrewd company" and, following a dispute with the Pentagon, said, "Stop all use of it immediately, we don't need it, we don't want it."
The Pentagon declared Anthropic a "national security risk" after the company sought to restrict military use of its AI models (specifically, refusing unrestricted use), and gave federal agencies a six-month transition period. The decision cost Anthropic its government contracts; rival companies (such as OpenAI) announced new agreements with the Pentagon.