Behind OpenAI's $110 billion funding is a competition between Amazon and Microsoft

Original | Odaily Planet Daily (@OdailyChina)

Author | Azuma (@azuma_eth)

On the evening of February 27, OpenAI announced it had completed a new funding round of $110 billion at a pre-money valuation of $730 billion.

The funds came from three major giants, with Amazon investing $50 billion (an initial investment of $15 billion, with the remaining $35 billion to be gradually funded over the coming months upon meeting certain conditions), Nvidia investing $30 billion (which will be used to purchase a total of 5 GW of computing power), and SoftBank also investing $30 billion.

After the funding was announced, OpenAI founder Sam Altman personally thanked his partners on his X account. Notably, the order of thanks was Amazon, Microsoft, Nvidia, and SoftBank: Microsoft, a longstanding partner and early investor that contributed nothing this round, was named immediately after Amazon, which committed the most funding.

Longtime AI-track observer Aakash Gupta pointed out that, while most attention has gone to the astronomical $110 billion figure, the most critical information in Sam Altman's remarks lies in two overlooked technical terms: "Stateless API" and "Stateful Runtime Environment," which appear in the Microsoft and Amazon plans respectively.

Behind these technical terms lie the present and future of AI

The core difference between Stateless API and Stateful Runtime Environment hinges on the words “Stateless” and “Stateful.”

“Stateless” in Stateless API means the server does not retain any persistent state across requests — each call performs a single inference. You ask a question, the AI responds, and once the request cycle ends, the system does not keep context or continue running. “Stateful” in Runtime Environment means a persistent execution environment — an agent with memory that can exist long-term, collaborate across tasks, and execute ongoing workflows.
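The contrast above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's actual API: `call_llm` is a hypothetical stand-in for a hosted model endpoint, and all names here are invented for the example.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical model endpoint: one inference per call, no memory."""
    return f"answer to: {prompt}"


def stateless_query(prompt: str) -> str:
    # Stateless API pattern: every call is independent; once the
    # request cycle ends, the server retains nothing.
    return call_llm(prompt)


class StatefulAgent:
    """Stateful runtime pattern: a long-lived agent that keeps memory
    across tasks and feeds prior context back into each inference."""

    def __init__(self) -> None:
        self.memory: list[str] = []  # persists for the agent's lifetime

    def run_task(self, task: str) -> str:
        context = " | ".join(self.memory)  # accumulated prior tasks
        result = call_llm(f"[context: {context}] {task}")
        self.memory.append(task)  # state carries over to the next task
        return result
```

Calling `stateless_query` twice with the same prompt yields identical, context-free answers, while a `StatefulAgent`'s second task sees the first task in its context; that persistent context is what distinguishes a runtime environment from a one-shot API.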

Currently, Stateless API is the mainstream form of commercialized large language models (LLMs). Industries like finance, retail, manufacturing, and healthcare mainly embed AI into existing systems via this method (e.g., chat assistants, document summarization, search enhancement). The advantage is that companies can rapidly add AI capabilities without major organizational or process restructuring, enabling low-friction feature improvements within existing architectures. However, as model capabilities converge, computing costs decline, and price competition intensifies, token-based billing for Stateless API tends toward standardization and commoditization, with margins likely to be squeezed over time.
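The margin-squeeze claim is simple arithmetic. The figures below are invented purely for illustration; they are not OpenAI's or anyone's actual pricing.

```python
def api_margin(price_per_1k_tokens: float, cost_per_1k_tokens: float) -> float:
    """Gross margin fraction for a token-billed stateless API."""
    return (price_per_1k_tokens - cost_per_1k_tokens) / price_per_1k_tokens


# Hypothetical scenario: compute cost per token falls, but competitive
# pricing falls faster, so the margin fraction shrinks even as volume grows.
early_margin = api_margin(price_per_1k_tokens=0.06, cost_per_1k_tokens=0.02)
later_margin = api_margin(price_per_1k_tokens=0.01, cost_per_1k_tokens=0.005)
```

Under these assumed numbers the margin drops from roughly two-thirds to one-half, which is the commoditization dynamic the paragraph describes: when models are interchangeable, price per token becomes the main lever of competition.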

In contrast, the Stateful Runtime Environment, while still limited in commercial scale, signifies a paradigm shift: it is not merely a functional optimization but a transformation of business models. It can be viewed as digital labor capable of concretely executing tasks, which means its addressable value extends beyond simple API calls to automation, workflow management, and even a share of human labor costs. As a result, market expectations for the Stateful Runtime Environment far exceed its current scale.

Aakash Gupta also noted that by 2026–2027, nearly all enterprise roadmaps will revolve around “autonomous agent workflows,” rather than one-off API calls. Companies heavily investing in AI will increasingly prefer systems that can run continuously, collaborate across tools, and maintain long-term context.

In simple terms, Stateless API represents the present, while Stateful Runtime Environment points to the future.

What did Microsoft and Amazon each get?

On the day of the funding announcement, Microsoft and Amazon separately announced new cooperation agreements with OpenAI.

Microsoft stated that the terms of the partnership first announced in October 2025 remain unchanged, including OpenAI's commitment to purchase $250 billion worth of Azure services. Azure remains the exclusive cloud provider for OpenAI's Stateless API: any Stateless API calls to OpenAI models made through third parties (including Amazon) will be hosted on Azure, and OpenAI's first-party products, including Frontier, will continue to run on Azure.

Amazon announced that AWS will work with OpenAI to build a Stateful Runtime Environment powered by OpenAI models and offer it to AWS customers via Amazon Bedrock, helping enterprises build generative AI applications and agents at production scale. AWS will also become the exclusive third-party cloud provider for OpenAI Frontier. The existing $38 billion multi-year partnership between OpenAI and Amazon will be expanded to $100 billion over eight years, with OpenAI consuming 2 GW of Trainium compute capacity on AWS infrastructure to support the Stateful Runtime Environment, Frontier, and other advanced workloads. OpenAI and Amazon will also develop customized models for Amazon's customer-facing applications.

Comparing these two announcements makes the current landscape very clear.

Microsoft is locking in current traffic with a $250 billion agreement and exclusive service rights: whenever OpenAI's Stateless API is called, Azure bills in the background, and regardless of who the customer is or which channel they use, all traffic ultimately flows back to Azure. This guarantees highly predictable cash flow, but Stateless API margins are shrinking; call volumes may grow while long-term profitability does not.

On the other hand, Amazon’s $50 billion investment and $100 billion expansion deal secure the underlying infrastructure for the AI agent era. Once agents become core productivity tools for enterprises, the resources consumed — compute, storage, orchestration, workflow management, cross-tool collaboration — will be embedded in AWS’s environment.

One controls current cash flow, the other bets on future productivity structures.

OpenAI's Diversified Bet

Before the future truly arrives, no one can say who is right or wrong — but what is clear is that, under these two clearly defined, mutually exclusive cooperation agreements, OpenAI’s bargaining power is significantly increasing.

In recent years, OpenAI has been highly dependent on Microsoft’s cloud infrastructure. Microsoft owns 27% of OpenAI and controls its infrastructure. This binding provided early resource advantages but also tilted bargaining power toward Microsoft. With Amazon’s strong entry, a direct competition over OpenAI’s future service rights is inevitable.

For OpenAI, this is a classic diversification strategy: not binding itself deeply to any single cloud provider, avoiding complete dependence on one partner's growth, and using its future business potential as leverage for better terms.

Neither Microsoft nor Amazon can afford to give up on OpenAI now. When both sides are unable to back down, bargaining power will naturally shift back to OpenAI.
