Artemis: 2030 Machine Economy Outlook
Author: Lucas Shin, Source: Artemis, Compiled by: Shaw Golden Finance
Overview
By 2030, intelligent agents (AI Agents) will become the primary way people use the internet.
A brand-new agentic network will require new payment rails, a new monetary system, and new foundational components.
Value will concentrate across three major layers: the interface layer, which controls user interactions; the payment layer, which sits in the flow of funds; and the compute & custody layer, which operates the underlying infrastructure.
Long-tail agent business activity will run on open protocols.
Let’s first paint a scenario.
The time is 2030. You’re 24, living in Burlington, Vermont, and you love investing—mostly allocating to U.S. stocks, and also trading a bit of crypto and prediction markets on Kalshi. Two months ago, you started a fintech consulting company as a side job.
On some days, like today, the opening always comes out of nowhere.
Buzz—
The phone’s ringing jolts you awake, like a bucket of cold water splashed across your face. It’s a message from your private intelligent agent, Nexus:
What exactly happened while you were asleep?
Nexus dispatched a research sub-agent, spending $0.24, and overnight pulled information from 40 different data providers. It compared Walmart’s latest earnings call transcript with satellite images of parking lots at Walmart stores across the U.S., updating your investment logic. When the satellite data showed Walmart’s customer traffic was declining, your portfolio agent cross-checked Kalshi’s earnings-sentiment markets to confirm the bearish signal, and finished reducing the position before you woke up.

Four years ago, this kind of trading strategy was still the exclusive domain of Citadel (and a small number of quant funds). They had to pay millions of dollars for satellite image subscriptions. Even a Bloomberg terminal costing $30,000 per year couldn’t cover all the information; you’d still have to separately subscribe to satellite imagery and alternative data, and spend hours integrating and analyzing it.

And now, a 24-year-old in Vermont can get the same information edge as a Citadel quant analyst for less than the cost of a cup of coffee.
Nexus’s sales sub-agent filtered 200 leads that matched your target customer profile—U.S. fintech companies in the Southeast that are Series B and beyond, and haven’t used data service providers—and completed information enrichment at a cost of $0.002 per lead. The APIs used were developed and listed on the open market by another agent. It selected the top 3 leads with the highest intent signals and immediately contacted their scheduling agents to negotiate meeting times. Before each meeting, it pulled the prospective customer’s graduating university, shared connections, company news, and funding history, and compiled a one-page brief for you—pinned into the meeting notes. Just the lead-information enrichment alone: if done via a SaaS subscription, each account costs $200 per month.
Nexus’s operations sub-agent ran comparative tests between your consulting website and 6 server providers: Vercel, Render, Railway, Fly.io, Netlify, and Cloudflare. It invoked each provider’s trial API interfaces at extremely low cost, deployed a test environment, and measured latency, availability, and throughput. In the end, Railway delivered equivalent performance at one-third the cost. Nexus negotiated the monthly fee via Railway’s pricing agent, set up a website mirror on the new servers, and completed the full test suite to ensure everything runs correctly. Without agents, this would take at least a week: searching online, requesting quotes, and also going through an anxiety-inducing manual migration. All you need to do is confirm execution to Nexus.
Your agent completed all of that for just $0.67.
Now, multiply this scenario across every knowledge worker worldwide, every business, and every intelligent agent that’s running.
Buzz—
As you did last week, you top up $5 with the credit card linked through Apple Pay, then go back to brushing your teeth. Under the hood, that $5 is converted from your card into stablecoins; you never see a wallet, never think about deposits, and never touch the blockchain.
This is a glimpse of the machine economy—an entirely new business scenario where AI agents will continuously spend money on things humans have never paid for, with transaction scale and speed far beyond the bounds of human commerce. You can imagine billions of transactions happening every day.
But today’s internet isn’t ready to support all of this.
The current internet is designed for humans. It filters non-human behavior through rate limits, CAPTCHAs, and API keys, and monetizes human users through advertising. However, as large numbers of autonomous agents appear, this business model will fail completely.
Traffic surges, while effective attention collapses.
Web services long subsidized by ad revenue will face an order-of-magnitude increase in requests, and those requests will never be influenced by ads.
Agent payments naturally solve this problem. Micro-payments will become the key to access.
Pay to scrape, pay to access, pay to use.
Companies that build the infrastructure ultimately adopted widely by agents will capture the largest new economic pool our generation will ever see. The existing giants are already fighting for their spots, but the machine economy will also spawn its own new giants. The last wave of “new internet” created Google, Amazon, Facebook, PayPal, and Salesforce.
The agentic internet era is coming.
Market size outlook
By 2030, most network interactions will no longer go through browsers. Your agents will browse on your behalf, run tests, negotiate terms, assemble sub-agent teams, and execute transactions. Every task they complete will generate a chain of small payments. Those per-use costs may look like new spending, but in reality they’re replacing tools and labor that cost far more. The better the tools, the better the agents perform, and the more autonomy we will grant them.
Demand and adoption speed
Let’s do a rough estimate.
In the earlier example, Joe’s agent completed hundreds of transactions for just $0.67. If you scale that to a 500-person mid-sized company—where each employee has a personal agent, plus hundreds of shared agents across sales, finance, legal, operations, and other departments—then you can easily generate 100,000 transactions per day initiated by agents.
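The mid-sized-company figure above is a back-of-envelope estimate. A minimal sketch of the arithmetic, where the number of shared agents (300) and the per-agent transaction rate (125/day) are illustrative assumptions not stated in the text:

```python
# Back-of-envelope sketch of the 500-person-company estimate.
# shared_agents and tx_per_agent_per_day are illustrative assumptions.

personal_agents = 500          # one agent per employee
shared_agents = 300            # "hundreds" shared across sales, finance, legal, ops
tx_per_agent_per_day = 125     # assumed API calls, data pulls, negotiations per agent

daily_tx = (personal_agents + shared_agents) * tx_per_agent_per_day
print(daily_tx)  # 100000
```

Any similar combination of agent counts and per-agent activity reaches the 100,000-transactions-per-day order of magnitude.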
There are over 1 billion knowledge workers globally, and 88% already use AI at work. The demand-side population is enormous and continuing to grow. But today, most of these uses are limited to basic tasks, like web search, document summarization, or writing emails. The transformation to fully agentic workflows hasn’t arrived yet—but once it starts, the speed will be astonishing.
Instagram reached 100 million users in 30 months, TikTok in 9 months, and ChatGPT in only 2 months (Reuters / UBS data). One reason ChatGPT spread so quickly is that the conversational interface is already familiar to everyone, and there’s no need to learn new software or change usage habits—you just describe what you need, and the agent will find a way to get it done.
The only obstacle is trust, and trust is being built far faster than people expect. Claude Code currently accounts for 4% of all public code commits on GitHub (over 135,000 per day). At the current growth rate, that share is expected to surpass 20% by the end of 2026, roughly a fivefold increase in 13 months. Developers went from skepticism to shipping production-grade code with AI in little more than a year.
As models become smarter, interfaces become more streamlined, and more technical complexity is abstracted away, I believe the adoption speed of intelligent agents will accelerate even further.
By 2030, even if only 60% of knowledge workers use agents and average daily spend is just $3 to $5 (a conservative estimate, given that Joe completed three tasks before breakfast for $0.67), annual transaction volume for personal agents alone will reach $800 billion to $1.4 trillion.
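The sizing above follows from the stated inputs. A quick order-of-magnitude check, where reading “over 1 billion knowledge workers” as 1.2 billion is my assumption:

```python
# Rough sizing of personal-agent spend from the article's stated inputs.
# knowledge_workers = 1.2B is an assumed reading of "over 1 billion";
# the result is order-of-magnitude only.

knowledge_workers = 1_200_000_000
adoption = 0.60                      # 60% of knowledge workers using agents
daily_spend_low, daily_spend_high = 3.0, 5.0

users = knowledge_workers * adoption
annual_low = users * daily_spend_low * 365
annual_high = users * daily_spend_high * 365
print(f"${annual_low/1e12:.2f}T - ${annual_high/1e12:.2f}T")  # $0.79T - $1.31T
```

That reproduces the lower end of the quoted $800B–$1.4T range; the upper end implies slightly higher adoption or spend.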
Enterprise market
Robbie Peterson of Dragonfly argues in his article that commercial intelligent agents are the natural evolution of the SaaS model. I completely agree. They’re no longer just augmenting workflows; they will replace existing workflows entirely. Since more than 95% of software spending today comes from enterprises and government institutions, agent usage and spending on the enterprise side will very likely far exceed the personal market.
We’re already witnessing this transformation. Klarna replaced Salesforce with an internal AI system, saving roughly $2 million. ZoomInfo built AI agents to replace its transaction approval department, saving over $1 million per year. These are just early cases where single workflows were agentified, saving millions in cost. Every enterprise has hundreds of such workflows across sales, finance, legal, operations, and R&D. Once intelligent agents are deployed company-wide, the scale of related spending will be staggering.
Anyone can become a merchant
As code agents dramatically reduce development costs, the entry barrier for internet merchants is moving toward zero. A wedding planner who’s good at venue selection can package and sell her best workflow. An independent developer in Lagos can build a vertical-domain API and start earning from agents around the world within hours. All you need is domain expertise: generate an API via prompts, and you can start getting paid.
But what happens when agents start selling services to other agents?
Suppose the Joe mentioned earlier wants to enter a new domain: a mid-sized healthcare business in the U.S. Midwest with legacy payment infrastructure. If his agent reasons from zero and completes everything, token costs would accumulate quickly:
Filter 200 companies matching a specific profile (inference + API calls): about 500,000 tokens
Enrich each lead’s information (tech stack, funding, hiring data): 200 leads × about 5,000 tokens = 1,000,000 tokens
Lock in decision-makers for core customers: about 200,000 tokens
Score intent signals (hiring cadence, contract cycle): about 300,000 tokens
Research each decision-maker’s background: 20 leads × about 10,000 tokens = 200,000 tokens
Write personalized outreach copy: 20 leads × about 3,000 tokens = 60,000 tokens
Total: about 2.3 million tokens. Using a cutting-edge model like Opus 4.6, the cost is between $8 and $15.
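The itemized budget above can be recomputed directly. In this sketch, the per-million-token prices are my assumptions chosen to bracket the article’s $8–$15 estimate; only the token totals come from the text:

```python
# Recomputing the from-scratch token budget itemized above.
# The blended $/1M-token prices are assumptions bracketing the
# article's $8-$15 estimate for a frontier model.

steps = {
    "filter 200 companies":         500_000,
    "enrich 200 leads":             200 * 5_000,
    "lock in decision-makers":      200_000,
    "score intent signals":         300_000,
    "research 20 decision-makers":  20 * 10_000,
    "write 20 outreach drafts":     20 * 3_000,
}
total_tokens = sum(steps.values())
print(total_tokens)  # 2260000, i.e. ~2.3M tokens

for price_per_million in (3.50, 6.50):   # assumed blended $/1M tokens
    print(round(total_tokens / 1_000_000 * price_per_million, 2))
```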
Wait—didn’t Joe’s sales sub-agent do a similar process for just a few cents?
Yes. Because most steps had already been solved by other agents. Lead enrichment, intent scoring, and scheduling all have packaged interfaces on the open market, priced at just a few tenths of a cent.
This model creates an entirely new business scenario. On the supply side, growth becomes bidirectional: humans build services, while agents also build services. A high-token-cost problem solved by one agent can become a cheap tool that all future agents use. In such a world, agents can turn their experience into workflows and sell them to other agents, subsidizing their own operating costs.
Every paradigm shift creates new merchants. Shopify empowered e-commerce sellers and Stripe empowered online businesses; the machine economy will empower impromptu developers and autonomous intelligent agents.
A reality check
So how far are we from truly commercial agentic transactions?
In my team at Artemis, we’ve been tracking the progress of two mainstream agent payment protocol families: Coinbase’s open x402 protocol, and the machine payment protocol (MPP) jointly launched by Stripe and Tempo. In simple terms, the goals of these two kinds of protocols are identical: allow users or agents to pay any network service (e.g., data, web scraping, model inference, or other API services) within a single network request—eliminating tedious steps like registering accounts, managing API keys, and billing settlement.
We’re still in the early stage.
As of late 2025, x402 protocol transaction volume is inflated by meme-coin hype and leaderboard volume farming. The chart above shows the “real” transaction activity after filtering out fake transactions with a proprietary algorithm. Once you strip away that noise, it’s clear the agent economy hasn’t truly arrived yet. Most current activity is developers testing paid APIs and AI tools, not actual agent-economy participants operating.
Before this model truly takes off, two core problems must be solved:
The supply side hasn’t formed: there aren’t nearly enough practical API interfaces that can create genuine willingness to pay from agents.
A mature discovery and aggregation layer is missing: even if high-value interfaces exist, agents currently have no reliable way to find them.
Because the ecosystem is still developing, using transaction volume as the primary metric is premature. A more reasonable observation metric is supply-side growth—i.e., the number of merchants providing services to agents. We’ll collectively call these merchants service providers.
The figure above shows the cumulative change over time in the number of service providers (sellers) that meet the standard. To qualify, a service provider must have completed more than two “real” transactions and have at least two independent buyers. In October of last year, the number was still below 100; now it’s over 4,000. I expect this growth rate will accelerate further, driven mainly by three trends:
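The qualification rule above (more than two “real” transactions, from at least two independent buyers) can be expressed as a simple filter. A minimal sketch, where the transaction records and field shapes are hypothetical:

```python
# Sketch of the seller-qualification rule described above: a service
# provider qualifies once it has more than two "real" transactions
# from at least two independent buyers. Records are hypothetical.

from collections import defaultdict

def qualified_sellers(transactions):
    """transactions: iterable of (seller_id, buyer_id) pairs,
    already filtered down to 'real' activity."""
    buyers = defaultdict(set)
    counts = defaultdict(int)
    for seller, buyer in transactions:
        buyers[seller].add(buyer)
        counts[seller] += 1
    return {s for s in counts
            if counts[s] > 2 and len(buyers[s]) >= 2}

txs = [("api.leads", "agentA"), ("api.leads", "agentB"),
       ("api.leads", "agentA"), ("api.geo", "agentA")]
print(qualified_sellers(txs))  # {'api.leads'}
```

Requiring multiple independent buyers is what screens out self-dealing and leaderboard farming of the kind described earlier.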
AI is lowering the barrier to creating digital products (as described earlier). That means more people and AI agents will become merchants.
New services will be designed with “agents first” as the guiding principle. Agents are becoming the core customers, and the product forms built for them will be completely different: use APIs instead of webpages, use instant access instead of registration flows, and use pay-as-you-go instead of subscription models.
Existing service providers will be forced to transform. As more users interact through AI interfaces rather than manually browsing webpages, the ad-based monetization model will fail completely—because there will be no human user attention to monetize. Enterprises will have no choice but to charge directly for content and services.
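The “agents first” product shape described above (API instead of webpage, instant access instead of registration, pay-as-you-go instead of subscription) can be sketched as request-handling logic. Payment verification is stubbed out, and the `X-PAYMENT` header and quote fields are simplified illustrations in the spirit of x402-style protocols, not an exact wire format:

```python
# Minimal sketch of an "agents-first" endpoint: no signup, no API key,
# just pay-per-call. verify_payment is a stub; a real service would
# verify a signed stablecoin payment (as in x402-style protocols).

PRICE = {"amount": "0.001", "currency": "USD"}   # illustrative quote

def verify_payment(payment_header: str) -> bool:
    # Stub: a real implementation checks a signed on-chain payment.
    return payment_header == "valid-signed-payment"

def handle_request(headers: dict) -> tuple[int, dict]:
    payment = headers.get("X-PAYMENT")
    if payment is None:
        # Instead of a login wall, return a machine-readable price quote.
        return 402, {"accepts": [PRICE]}
    if not verify_payment(payment):
        return 402, {"error": "invalid payment"}
    return 200, {"leads": ["enriched result"]}

print(handle_request({}))                                     # quote, status 402
print(handle_request({"X-PAYMENT": "valid-signed-payment"}))  # data, status 200
```

The key design point is that the unauthenticated path returns a quote rather than an error page, so an agent can pay and retry without any prior relationship.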
These forces will form a positive feedback loop: supply and demand amplify each other at both ends, ultimately igniting the entire agent economy.
Industry landscape
The architecture of the agent transaction ecosystem is taking shape rapidly. Startups are springing up everywhere, each focused on closing a different gap in that architecture. At the same time, growth-stage companies in fintech and SaaS are shifting toward native agent transactions. Over the past twelve months, almost all major payment giants and AI labs have launched or announced protocols related to agent transactions.
We’ve mapped more than 170 companies across five major layers: interaction interfaces, intelligent agents, account systems, payment infrastructure, and AI engines. Here, we’ve streamlined it to about 80 core institutions:
We break it down layer by layer, from top to bottom.
Interface layer
The interface layer is closest to the user. It’s responsible for directing user intent (needs) toward the required tools or services (supply). Whoever defines how intelligent agents discover, evaluate, and choose services will have enormous control over all the lower layers. We’ll focus on the two most important categories within this layer:
User interfaces
This is the entry point where most people directly interact with intelligent agents. Apple, Google, OpenAI, Anthropic, xAI, and Perplexity are all building these interaction interfaces, and their forms are rapidly moving beyond just a chat mode. New formats—voice assistants, desktop assistants, embedded copilots, browser agents—are continuously emerging, aligning with real usage scenarios. The platform that becomes the default AI interface for users will be the starting point for all agent-initiated transactions. The winner in this track will gain an additional major advantage.
AI labs have already crawled and trained on the entire internet’s data. What remains as the highest-quality training data is human-guided feedback. Every time you accept or reject a response, make corrections, or provide preference information to Claude or ChatGPT, the interaction interface you use captures this data for resale or for model training. Controlling the interface is equivalent to controlling the feedback loop that optimizes user experience and the model itself. This is also why Anthropic launched Claude Code, why Google struck a deal for Windsurf’s team, and why OpenAI is trying to acquire Cursor. Once your agent accumulates contextual information about your preferences, workflows, and commonly used tools, your migration costs become extremely high.
Service discovery
When Joe’s agent needs a lead-enrichment interface or a satellite data provider, how does it find the right service? This may be the biggest unresolved problem in the entire ecosystem architecture. Most current solutions are either hard-coded tool lists or curated service marketplaces. Major platforms are building their own systems: OpenAI and Stripe have launched ACP, Google and Shopify have launched UCP, and Visa has launched TAP. Fundamentally, these are merchant directories that only work if both the platform and merchants proactively integrate. This model performs well in typical scenarios, but as the barriers to creating and selling digital services drop significantly, many niche, highly customized applications will emerge—and curated models can’t satisfy these long-tail needs.
Companies such as Coinbase, Merit Systems, Orthogonal, and Sapiom are building open alternatives. They’re creating aggregators and underlying infrastructure so that agents can autonomously search for and pay for services at runtime, without prior integration or commercial agreements. As the supply side (i.e., network resources) grows exponentially, solving this problem becomes extremely difficult. But whoever can crack ordering and recommendation systems—so that agents match to the right services at the right time—will hold massive industry influence.
Whether agent transactions ultimately move toward curated closed ecosystems or open ones, and how that outcome determines value allocation, is one of the central debates in the field. We’ll explore this topic in depth later.
Intelligent agents and account layer
Intelligence alone is far from enough for agents to accomplish tasks for us. Joe’s sales sub-agent handled the entire flow (filtering 200 leads, enriching their information, and scheduling three meetings) while Joe never had to configure tools, manage API keys, or approve each step. Most of the underlying infrastructure that makes this possible is invisible to end users; without it, an agent is just a large language model with no ability to execute. Here’s an overview of the core components needed to make all of this happen:
Tools and standards
These protocols and frameworks give intelligent agents the ability to interact with the external world. MCP (Model Context Protocol, initiated by Anthropic and currently managed by the Linux Foundation) allows agents to connect to external data and tools: calling previously unexposed APIs, reading databases, or invoking a service instantly. A2A (proposed by Google) defines how agents built on different platforms can discover each other and collaborate. The frameworks released by LangChain, Nvidia, and Cloudflare provide developers with foundational building blocks to create and deploy agents on top of these protocols. OpenClaw, recently acquired by OpenAI, bundles context management and tool calling into a single local-first framework, greatly reducing the difficulty for developers to build agents that can autonomously discover and pay for services.
The core challenge in this area is: will these standards ultimately converge into one, or will they fragment? And before tools become commoditized, can commercial frameworks built on top of these standards capture value?
Identity authentication
After agents can communicate, trust must be established. Before an agent can execute transactions or sell services, it must prove its authorized entity and operating permissions, and keep a behavior record that other agents can verify.
There are currently many technical paths, including: biometric authentication (Worldcoin, Civic), on-chain agent reputation systems (ERC-8004), and verifiable credentials (Dock, Reclaim).
This area has wide design space and extremely high risk: how much money can your agent spend before it receives your approval? Can it sign contracts on your behalf? Can it delegate permissions to sub-agents? These kinds of rules and security boundaries will most likely be finalized at the account layer.
Wallets
Clearly, agents need wallets in order to make payments. Many providers—Coinbase, Safe, MetaMask, Phantom, MoonPay, Privy, and others—are building in this area, offering features including programmatic access and creation, permission delegation, per-transaction spend limits, whitelisted payees, and the ability to run across multiple chains without requiring the user to manually confirm each operation. This is one of the most fiercely competitive tracks in the entire ecosystem, and it also raises a key question: where exactly is a company’s moat? Will this area ultimately become commoditized?
Payment layer
The payment layer sits deeper in the architecture and should be invisible to end users. But in the machine economy, every unit of money will flow through here. When Joe’s agent pays $0.24 overnight to pull data from 40 service providers, he doesn’t need to choose which card network, currency, or settlement chain for every single transaction.
The key difficulty is that traditional payment rails are designed for humans to click a “Buy” button—not for agent API calls that happen thousands of times per minute, with per-call amounts below a cent. Card network transactions incur an approximately $0.03–$0.04 fixed cost per transaction, plus 2.3%–2.9% in fees. That works for a $400 hotel order, but it completely fails to fit new multi-step agent transactions.
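The mismatch above is easy to quantify: the fixed per-transaction cost that is negligible on a $400 order dwarfs a sub-cent API call. A quick sketch using the midpoints of the fee figures quoted in the text:

```python
# Why card rails break down at machine scale: the fixed per-transaction
# cost dominates micro-payments. Fee figures are midpoints of the
# article's $0.03-$0.04 fixed cost and 2.3%-2.9% rate.

def card_fee(amount, fixed=0.035, rate=0.026):
    return fixed + amount * rate

for amount in (400.00, 0.01, 0.001):
    fee = card_fee(amount)
    print(f"${amount}: fee ${fee:.4f} = {fee/amount:.1%} of the payment")
```

On the $400 hotel order the fee is about 2.6%; on a one-cent API call the fee is several times the payment itself, and on a tenth-of-a-cent call it is thousands of percent.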
This has given rise to a batch of brand-new protocols and monetary systems designed specifically for agent transactions, while traditional giants are also modifying existing infrastructure to accommodate these needs.
Key points are as follows:
Payment rails
These protocols and standards define how intelligent agents initiate, route, and complete payment settlement. Currently, two main technical routes are taking shape:
x402 (Coinbase/Cloudflare) and MPP (Stripe/Tempo) are designed for machine-native transactions: the agent calls interfaces, gets quotes, signs payments, and receives data—all completed within a single HTTP request, settled with stablecoins, and with per-transaction costs of only a few tenths of a cent.
ACP (OpenAI/Stripe), AP2 (Google/PayPal), and Visa’s TAP take another approach: they retrofit existing card payment infrastructure for agent scenarios. These solutions are better suited to high-value transactions, where buyer protections and merchant acceptance coverage matter more than settlement speed and cost.
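The machine-native route in the first bullet can be sketched from the client side: request, receive a 402 quote, attach a signed payment, retry. Here the server and the signing step are mocked, and the `X-PAYMENT` header and quote fields follow the general shape of x402-style protocols but are illustrative, not the exact wire format:

```python
# Simplified client-side sketch of a machine-native payment flow:
# one request gets a 402 quote; the retry carries a signed payment.
# The server and wallet signature are mocked for illustration.

def mock_server(headers):
    if "X-PAYMENT" not in headers:
        return 402, {"accepts": [{"amount": "0.001", "asset": "USDC"}]}
    return 200, {"data": "satellite-traffic-series"}

def sign_payment(quote):
    # Stand-in for a wallet signature over the quoted amount.
    return "signed:" + quote["amount"]

def fetch_with_payment(url, headers=None):
    # url is unused by the mock; a real client would issue HTTP requests.
    headers = dict(headers or {})
    status, body = mock_server(headers)
    if status == 402:                         # got a quote: pay and retry
        headers["X-PAYMENT"] = sign_payment(body["accepts"][0])
        status, body = mock_server(headers)
    return status, body

print(fetch_with_payment("https://example-data.api/traffic"))
# (200, {'data': 'satellite-traffic-series'})
```

Quote, payment, and data delivery all happen inside one request/retry cycle, with no account registration or API-key management anywhere in the loop.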
Stablecoins and settlement
Intelligent agents need programmable, fast, low-cost, globally usable money. Stablecoins fit these requirements well, making them the natural choice for x402 and MPP transactions. At the same time, card rails still provide buyer protections and mature merchant habits, which remain important for high-value transactions. The underlying blockchains (e.g., Base, Solana, Tempo) raise another crucial question: which chains can support the throughput, transaction finality, and cost structure that agent-scale transaction volume requires?
Service providers
These institutions act as intermediaries between intelligent agents and merchants, handling complex steps like compliance checks, merchant onboarding, and permission authentication. Coinbase, Stripe, and PayPal are expanding their existing ecosystems to support agent transactions. They’re betting that their merchant networks and compliance infrastructure can form a competitive advantage. Other institutions such as Sponge and Sapiom start from the emerging-merchant side, solving the cold-start problem so that any API-based business can easily begin accepting agent payments. As payment rails, protocols, and the number of merchants continue to grow, coordinators are expected to become the key connective tissue that prevents the whole system from fragmenting.
AI engine layer
This layer doesn’t need much introduction: all agent interactions, reasoning steps, and tool calls are driven by it. But the pace of business-model change for this layer is far faster than for other parts of the architecture, and where value ultimately flows isn’t as clear as it seems on the surface. We focus on two major categories:
Compute and custody
Every time Joe’s intelligent agent performs inference on a task, calls tools, or creates sub-agents, it consumes compute. But inference is only part of it. With the explosive growth of low-code / rapid development applications and agents building their own services, a large number of new interfaces are emerging, all of which need hosting substrates. As of May 2025, the number of accessible webpages grew by 45% in just two years. And as code agents make launching new services extremely easy, this growth rate will only accelerate. This means compute demand is rising from both ends simultaneously: on the one hand more agents are processing more tasks; on the other hand more services are continuously launching to meet their needs.
Hyperscale cloud providers (AWS, Google Cloud, Nvidia) are obviously the core participants. Among them, AWS and Google Cloud are continuously simplifying the deployment processes for agent backends and APIs on their infrastructures. Cloudflare focuses on edge computing, providing low-latency serverless compute for agent-facing services. Decentralized compute platforms like Akash, Bittensor, Nous, and others meet the demand for excess compute by aggregating global GPU resources and selling them at very low prices.
Foundation models
Foundation models are the “brains” of the whole system. Anthropic, OpenAI, Google, and Meta—the frontier labs—keep expanding the boundaries of intelligent agent capabilities, while the cost of running these models is dropping quickly. At the end of 2022, running a GPT-4-level model cost about $20 per million tokens; by early 2026, a model with comparable performance costs about $0.05 per million tokens, a roughly 400x reduction in just over three years. Hardware upgrades, vendor competition, and optimization techniques such as prompt caching and batching all keep pushing inference costs down. Meanwhile, as reasoning ability is distilled into smaller open-weight models that are extremely cheap to run, the cost of building intelligence keeps dropping as well. On some benchmarks, the performance gap between open-weight and closed-weight models has narrowed to only 1.7%.
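Taking the quoted price points at face value, the implied decline rate is easy to derive. In this sketch, the 3.2-year span (late 2022 to early 2026) is my approximation:

```python
# Implied cost decline from the price points quoted above.
# The 3.2-year span (late 2022 -> early 2026) is an approximation.

p_2022, p_2026 = 20.00, 0.05   # $ per million tokens, per the text
years = 3.2

fold = p_2022 / p_2026
annual = fold ** (1 / years)   # compound annual cheapening factor
print(f"{fold:.0f}x cheaper overall, ~{annual:.1f}x cheaper per year")
```

An intelligence input that gets roughly 6–7x cheaper every year is the core tailwind behind everything in this section.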
This is a major tailwind for the machine economy.
Cheaper intelligence means cheaper agents, which allows even a 24-year-old solo founder in Vermont to comfortably afford operating costs—thereby pushing transaction activity across the ecosystem’s higher layers even further. If foundation models get stuck in price competition like today’s cloud providers, then value may ultimately concentrate in the upstream and downstream layers around models, rather than in the models themselves.
Who will be the winner?
By 2030, most of your digital interactions won’t need browsers, search engines, or app stores anymore. You simply state your needs, and intelligent agents will handle everything end-to-end: find the right services, negotiate terms, complete payments, and deliver the final results. The internet will look completely different.
Think of it as: the search-engine-optimization era for agents. There will be more and more API interfaces, and far fewer human-facing interaction interfaces.
In such a world, who captures the value?
Sam Lageserdale of Merit Systems has written comparing today’s agent transaction ecosystem to the early internet. He argues that the curated agent service marketplaces built by the major platforms (ACP, UCP, TAP) are retracing AOL’s path in the 1990s U.S. online scene: polished experience, closed system, but with the core limitation that every service provider must pass manual selection and review. Open protocols like x402 and MPP are rougher around the edges, but they’re permissionless: anyone can build interfaces, without a business team or legal review, and earn revenue through agents. In the 1990s the walled gardens had the better product experience, but the open internet had infinite possibilities.
Ultimately, the open internet wins.
The same logic is playing out again. ACP, UCP, and TAP will connect with top AI labs and work well for mainstream scenarios, but agents restricted to pre-approved service-provider directories can only complete tasks the platforms designed in advance. Agents that can connect to the entire open protocol ecosystem have far broader capability boundaries.
After all, the most vibrant part of today’s internet comes from the massive long-tail traffic to open websites enabled by the HTTP protocol.
We must humbly admit that we can’t imagine the full scope of an open agent internet. Just like in 1995 no one could predict ride-hailing or social media, once we provide agents with the tools they need, we can’t predict what they will create, which services people will pay for, or how they will evolve.
As we discussed earlier, foundation models are rapidly converging. Value may shift toward other layers in the technical architecture. Developer tools, wallets, and identity infrastructure are critical—but as standards unify, these areas are also likely to commoditize. Therefore, I believe value will concentrate in three major areas: interaction interfaces, payments, and compute.
Interaction interfaces
Interaction interfaces determine spending limits, approval flows, and trust delegation mechanisms. The platform that can deliver the most personalized experience for users will carry the most transaction traffic.
Apple is the most underrated participant in this area. Its devices are deeply embedded in people’s daily lives, and user migration costs are extremely high. If Siri evolves into a mature agent interaction entry point, Apple doesn’t need the absolute best model in order to control the starting point for billions of transactions; it only needs to maintain the highest-quality entry point.
Google’s transition is even tougher. Moving from humans manually browsing to agents intelligently filtering will erode its core advertising revenue. But Google has advantages that no other company can match: it has amassed decades of personal data in search, email, calendars, maps, and documents. You also have to consider enterprise migration costs—Google Workspace is embedded in millions of enterprises, and employees’ email, files, and workflows run on Google infrastructure. If there’s any company that can build the most personalized agents for both consumers and enterprises, it’s Google. The question is whether it can monetize agent services as efficiently as it monetizes search traffic.
Merit Systems is my dark-horse pick. They're building service discovery infrastructure for the open agent economy (AgentCash, x402 scanning, MPP scanning) while also developing consumer-facing interfaces (Poncho). The core logic: whoever controls an agent's service discovery channels and sits in the flow of funds will occupy the position Google held in the early internet. It's an ambitious bet, but if open agent transactions beat the curated, closed mode, Merit will become the best-positioned aggregation layer. Right now it's still early stage, much like Google back when it was still competing against AOL's closed ecosystem, which at its peak was valued at roughly $350 billion.
Payments
Whoever controls the flow of funds captures a cut of every transaction. I'm most confident about this layer because its scale will grow in direct proportion to transaction volume.
Stripe and Tempo have the strongest position in machine-native payments. Stripe already has a mature developer ecosystem and a massive merchant network. Tempo, meanwhile, was built specifically for the machine economy's massive transaction volumes: roughly 500-millisecond transaction finality, streaming payment rails, native support for both cards and stablecoins, gas fees payable in USD (no token-volatility risk), server-mediated settlement, and more. If MPP becomes the default machine-native payment rail, Stripe and Tempo will take a share of every agent-initiated transaction.
Circle will grow in step with the agent economy. I firmly believe stablecoins will become the settlement layer of the machine economy; Circle will then earn reserve yield on every dollar sitting in agent wallets. USDC is the stablecoin with the widest acceptance across exchanges, wallets, public chains, and payment protocols. New developers will reach for it first, further deepening ecosystem integration and raising the bar for competitors.
Visa will adapt. Recall Joe topping up via Apple Pay with a credit card while, underneath, the funds were automatically converted into stablecoins: he never saw a wallet and never had to think about the blockchain. That's the future norm. Consumers will keep using familiar card rails while stablecoins handle settlement underneath. As payment rails upgrade, Visa will lean on its brand trust with consumers and merchants to secure its footing.
Compute and custody
Growth in the number of agents means rising inference demand. More on-demand services mean greater hosting needs. Whichever model, protocol, or interface becomes dominant, compute providers will benefit. AWS and Cloudflare are the two companies with the strongest advantage here, for similar reasons.
First, they already carry most of the internet's traffic. AWS holds about 30% of the cloud-infrastructure market across 37 regions worldwide. Cloudflare provides security and performance services for over 20% of websites, meaning requests to those sites pass through its network. When new agent-facing services surge, developers will default to the deployment platforms they already know.
Second, they're building monetization infrastructure for the next-generation internet. As ad-based models fade and paid-access models rise, both companies support the shift natively. Cloudflare has launched a paid-crawling service that lets any website on its network charge AI scrapers via x402 (Stack Overflow is already using it). AWS is a founding member of the x402 fund and has released an open-source serverless x402 reference architecture. Any service running on either platform can easily switch on native agent monetization.
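The pattern behind these services is HTTP's long-dormant 402 "Payment Required" status code: quote payment terms to an unauthenticated caller, serve content once proof of payment arrives. Below is a minimal sketch of that flow; it uses no real x402 library, and the price, address, and proof check are all stubs for illustration:

```python
PRICE_USD = "0.001"       # illustrative price per request
PAY_TO = "0xDEMOADDRESS"  # illustrative receiving address

def verify_payment(proof: str) -> bool:
    # Stub: a real facilitator would verify an on-chain settlement here.
    return proof == "valid-proof"

def handle_request(headers: dict):
    """x402-style flow: no payment proof -> 402 with payment terms;
    valid proof -> 200 with the content."""
    proof = headers.get("X-PAYMENT")
    if proof is None:
        # Quote the terms so the calling agent can pay and retry.
        return 402, {"amount": PRICE_USD, "currency": "USDC", "payTo": PAY_TO}
    if not verify_payment(proof):
        return 402, {"error": "invalid payment"}
    return 200, "<article body the scraper paid for>"

status, body = handle_request({})
print(status)  # 402: terms quoted, nothing served
status, body = handle_request({"X-PAYMENT": "valid-proof"})
print(status)  # 200: paid, content served
```

The appeal for hosting providers is clear: this handler is a few lines of logic sitting in front of content they already serve.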
Identity authentication
I'm pessimistic about companies like Worldcoin. The system they're building requires human verification for every interaction. This maximalist vision assumes people care whether the counterparty in an online interaction is a human or an agent, but we have already grown used to not knowing. In my view, the more likely future is that most internet traffic gets filtered on the basis of micropayments, not proof-of-humanity credentials.
Paywalled access will be more useful than “proving you’re human.”
Identity systems will matter only for a subset of high-risk interactions; in the vast majority of agent transactions, the micropayment itself is the trust credential.
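Seen from the agent's side, "payment as credential" means there is no identity handshake at all: hit the endpoint, read the quoted price, pay if it fits the task budget, retry. A toy sketch (the endpoint, prices, and proof token are all made up for illustration):

```python
def fake_service(headers: dict):
    """Toy paywalled endpoint: payment proof is the only credential it checks."""
    if headers.get("X-PAYMENT") == "paid":
        return 200, "data"
    return 402, {"amount_usd": 0.002}

def agent_fetch(call, budget_usd: float):
    """Call an endpoint; on 402, pay (if affordable) and retry once.
    No identity handshake anywhere: the payment is the trust signal."""
    status, body = call({})
    if status == 402:
        price = body["amount_usd"]
        if price > budget_usd:
            return None  # too expensive for this task, walk away
        # Settle the micropayment (stubbed), then retry with proof attached.
        status, body = call({"X-PAYMENT": "paid"})
    return body if status == 200 else None

print(agent_fetch(fake_service, budget_usd=0.01))   # pays 0.002, gets 'data'
print(agent_fetch(fake_service, budget_usd=0.001))  # too pricey, gets None
```

The economic cost of the request does the spam filtering that proof-of-humanity schemes try to do with biometrics.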
Conclusion
When Joe wakes up, he doesn’t think about payment rails or agent identity protocols. He just looks at his phone and learns that the agent has completed transactions, booked meetings, and found cheaper servers. Every technical architecture layer discussed in this article has been perfectly abstracted away, and he doesn’t need to worry at all.
We're still on the way to this future. The relevant protocols are live but not yet widely adopted; the supply side is growing but still thin; service discovery remains unsolved; and the identity layer is highly fragmented. Most transactions today are still developers testing, not real agent commerce. But the ecosystem's pieces are falling into place faster than the metrics can show. Those bearish on early infrastructure focus only on the current downward curve; what I'm thinking about is what this picture looks like once everyone has an agent, or a fleet of them, with genuine economic agency.
If you haven’t acted yet, it’s time to transition to the agent economy model.