The "Ice and Fire" of the AI world: Anthropic was just banned, and OpenAI has already secured a deal with the Pentagon


OpenAI CEO Sam Altman announced on Friday (February 27) that the company has reached an agreement with the U.S. Department of Defense to deploy its AI models within their classified networks. Earlier that day, President Trump added OpenAI’s competitor, Anthropic, to the “blacklist,” instructing federal agencies and military contractors to cease all dealings with Anthropic.

On Friday, Altman posted on social media platform X: “Tonight, we have reached an agreement with the Department of Defense to deploy our models on their secure networks. Throughout our discussions, the Department has shown a high level of security awareness and is eager to work with us to achieve the best results.”

The AI industry has just come through a turbulent week, centered on a fierce clash between Anthropic and the U.S. Department of Defense. Altman's post can be read as the conclusion of that drama.

The Department of Defense chooses OpenAI

Anthropic was the first company to deploy its models on the Department of Defense’s classified networks. Previously, the company had been negotiating contract terms with the agency, but the negotiations ultimately broke down.

Anthropic sought assurances from the Department that its models would not be used in fully autonomous weapons systems or for large-scale surveillance of U.S. citizens. The Department, on the other hand, wanted Anthropic to agree to allow military use of these models for all lawful purposes.

Earlier on Friday, after the negotiations with Anthropic broke down, U.S. Secretary of Defense Lloyd Austin designated Anthropic a "supply chain risk to national security." Notably, this label is usually reserved for foreign competitors; the designation bars Department of Defense suppliers and contractors from using Anthropic's models.

President Trump also directed all federal agencies within the U.S. to “immediately cease” using Anthropic’s technology.

Strangely, Altman had informed employees in a memo on Thursday (February 26) that OpenAI maintains the same "red line" policies as Anthropic. He also stated in his Friday post that the Department of Defense had agreed to abide by these restrictions.

Altman wrote, “Our two most important safety principles are prohibiting large-scale domestic surveillance and ensuring human accountability when using force.”

It remains unclear why the U.S. Department of Defense chose to work with OpenAI rather than accept Anthropic's terms. In recent months, however, U.S. officials have criticized Anthropic, arguing that it appears overly focused on AI safety.

Altman also mentioned that OpenAI will establish “technical safeguards to ensure its models operate properly,” and that the company will send personnel “to assist with model deployment and ensure safety.”

(Source: Cailian Press)
