a16z: Why the next billion AI users will arrive through trust networks

Author: Sakina Arsiwala, a16z Researcher; Source: a16z crypto; Translated by: Shaw Golden Finance

Lessons from YouTube: Content as a Geopolitical Weapon

Years ago, I served as head of Google's international search products and then led YouTube's international expansion, launching the product in 21 countries in just 14 months. The work went far beyond product localization: it meant building local content partnerships and navigating minefields of law, policy, and market access. More recently, I ran community health (trust and safety) at Twitch, and along the way I have founded two startups.

The current landscape of artificial intelligence (AI) bears a striking resemblance to the early growth phases of Google and YouTube. My career taught me one truth: globalization is not a product feature but a geopolitical game. The deepest lesson is that distribution has never been a purely technical problem. Growth relies on local partners, cultural interpreters, and trusted community opinion leaders to build bridges between global platforms and local users.

I lived through the GEMA copyright ban in Germany, when a music rights agency nearly excluded an entire country from YouTube's pan-European rollout. I went through the blasphemy arrest-warrant incident in Thailand: as YouTube's public-facing lead, I faced the risk of arrest over platform content deemed insulting to the Thai king, and could no longer travel through the country. I watched Pakistan cut off the national internet to block a single video. I also remember our office in India coming under attack when global algorithms collided with local religious taboos.

What we really need to address has never just been about policies or infrastructure, but about trust barriers.

In every market, someone must first bear the cost of working out which content is safe, acceptable, and valuable for users to engage with. That cost accumulates over time into a trust tax: borne at first by a small group, then shared by everyone.

Today, the same contradictions are resurfacing in artificial intelligence, only with higher stakes, faster dynamics, and broader impact. The U.S. federal government recently reached an impasse with Anthropic, sparking public debate; OpenAI faces growing scrutiny over its public-sector partnerships. We are witnessing a shift: adoption no longer depends on utility alone, as ideology exerts a deepening influence. In this environment trust is fragile, and a seemingly small collapse of trust can trigger rapid, massive user attrition.

Google is doubling down on its deep-trust strategy, leveraging users' familiarity with the existing Workspace and Search ecosystems to unlock markets, but the global landscape is fragmenting. The EU's stringent regulatory red lines, China's fierce AI development race, and rising AI nationalism have put the world on high alert.

The lesson for 2026 is clear: institutional trust and cultural recognition are now inseparable from the product itself. Without trust as a foundation, it is impossible to build intelligent operating systems.

This is the sovereign barrier—a structural boundary where global AI clashes with local controls. From a product perspective, it manifests as a more direct form: the trust barrier.

The expansion of all global AI systems will ultimately hit this wall. At this critical point, user acceptance will no longer depend on technical capability, but on whether users, institutions, and governments trust it in their own context.

The internet used to be borderless. AI will not be.

The End of the Age of Explorers

The first billion AI users were explorers and technological optimists. But the age of exploration has ended. For the past three years we have lived in an era of prompt engineering and digital alchemy, with people opening popular applications like ChatGPT and Claude the way they might visit a digital temple to witness the miracles of generative intelligence. In that era, the only metric that mattered was model capability: who tops the latest benchmark? Who has the largest parameter count?

As we enter 2026, however, the campfire of the explorer era is dying down. We are no longer building toys for the curious; we are building intelligent operating systems: the invisible, ubiquitous underlying channels that power a solo entrepreneur in São Paulo, Brazil, and a community health worker in Jakarta, Indonesia.

These users are not explorers but pragmatists. They do not want to converse with the “ghost” in the machine; they want a tool that removes real obstacles in their lives. This is the true crossing-the-chasm moment for capturing the next billion users. And it is precisely in this uncharted territory that Silicon Valley's dream of a single global API collides with the harshest reality of the era: sovereign barriers.

The core shift is this: AI proliferation is no longer primarily a question of model capability but of distribution and trust. Cutting-edge labs will keep improving model performance, but the next billion users will not arrive because a model scores higher on a benchmark; they will arrive because AI reaches them through institutions, creators, and communities they already trust.

Reality in 2026: AI Becomes a National Infrastructure Proposition

In 2026, the industry's core challenge is no longer making models smarter but ensuring models obtain access. Sovereign barriers are the boundaries where general intelligence meets national identity. Globally, the barrier is taking shape: data-localization requirements, national AI compute plans, and government-led model projects in India, the UAE, Europe, and beyond. Early cloud-infrastructure policies are rapidly evolving into intelligence-sovereignty policies. Within this framework, nations refuse to become “data colonies”: they demand that the intelligent systems serving their citizens operate within sovereign data warehouses, inherit local culture, and respect national borders.

When you see the CEOs of Google (Sundar Pichai), OpenAI (Sam Altman), Anthropic (Dario Amodei), and DeepMind (Demis Hassabis) sharing a stage with India's Prime Minister Modi at the 2026 AI Impact Summit, you are watching sovereign barriers take physical form. The M.A.N.A.V. vision Modi proposed (moral and ethical framework, accountable governance, national sovereignty, inclusive AI, trustworthy systems) sends a clear message: cutting-edge labs that try to chase consumers directly will ultimately be regulated out of existence. Trust is the only currency that crosses these borders.


The Dilemma of Weakened Network Effects and Why It Forces New Strategies

Unlike social platforms, where each new user adds value for every other user, the value of AI is largely localized. My first thousand prompts do not directly make the system more valuable to you. The data flywheel can improve models, but the user experience remains personal, not social. AI is a personal tool that can carry emotional nuance, but at its core it is practical.

This creates a structural problem: AI cannot rely on the compounding social network effects that previous generations of platforms thrived on. Without a native social graph, the industry falls into a high-burn loop, endlessly chasing early adopters, power users, and tech elites. That strategy worked in the age of explorers, but it cannot scale to reach the next two billion users.

More importantly, this model fails completely at sovereign barriers: when network effects are weak, trust does not form spontaneously; it must be imported.

Transformation: Shifting from Network Effects to Trust Effects

If AI cannot rely on social network effects to promote adoption, it must depend on another force: trust networks. This is a key shift:

From acquiring users to empowering intermediaries

YouTube was able to scale because it leveraged existing human trust networks. AI must do the same. Rather than trying to establish direct relationships with billions of users, the winning strategy should be:

  • Empower those who already have user relationships;

  • Utilize the trust they have already accumulated;

  • Distribute intelligent capabilities through these channels.

Why This Is Crucial

In a world shaped by sovereign barriers:

  • Distribution channels are limited;

  • Direct-to-user models are fragile;

  • Trust is localized, not globalized.

Without strong network effects, AI cannot scale through brute force; it must penetrate through trust. AI does not have network effects; it has trust effects.

Solution: The Era of Intermediation Has Arrived

How did YouTube establish itself in international markets? Not by having a better player or merely localizing interface text. The winning move was becoming the preferred platform of the people local audiences already trusted. In every market, adoption started not with YouTube itself but with identity anchors: the individuals and communities who already commanded the cultural conversation:

  • Bollywood fan pages compiling rare Shah Rukh Khan clips for the Dubai expatriate community

  • American anime enthusiasts building a deep content ecosystem that mainstream media has not covered

  • Local comedians, teachers, and remix creators transforming global content into culturally resonant formats

These creators did not just upload videos; they interpreted the internet for their audiences, acting as trust intermediaries and bridging overseas platforms and local users. YouTube's success lay in becoming the invisible infrastructure supporting these identity anchors.

The Overlooked Core Logic: Direct-to-Consumer Models Collide with Sovereign Barriers

Most AI companies still adhere to direct-to-consumer thinking: build better models → present them through chat interfaces → acquire users directly.

This model works in the short term but is hard to sustain, because in high-friction markets users do not adopt new technology directly; they adopt it through people they trust.

YouTube’s global expansion did not come from convincing billions of users one by one but from empowering those who had already earned audience trust. This is the true meaning of invisible infrastructure: you do not own the user relationship; you support it. And at scale, that model builds a stronger moat.

Shifting from Chat to Intelligent Agents: Empowering Trust Intermediaries

This is exactly where the shift from chat interfaces to intelligent agents comes in. Chat is a tool aimed at individuals; intelligent agents give leverage to intermediaries. If we apply Anthropic executive Amie Waller's idea of “building products for the most fatigued people,” then in many markets those people are trust converters:

  • Educators adapting overseas ideas

  • Entrepreneurs navigating local bureaucracies

  • Community leaders handling information overload

The winning approach is to solve the trust latency they face: the gap between global intelligent capability and local practical scenarios. That requires a practical intelligent-agent support system:

  • For educators: Sora / GPT-5.2 reworking curricula, replacing American-football analogies with cricket while preserving core meaning and fitting local culture.

  • For individual entrepreneurs: Intelligent agents that not only interpret Singapore tax forms but also complete and submit them through local APIs.

  • For community leaders: adding contextual memory to WhatsApp, extracting structured action items from ten thousand messages while preserving the useful information and upholding community norms.
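The community-leader scenario above can be sketched in miniature. The toy Python below illustrates the underlying pattern of turning a noisy message log into structured action items; keyword heuristics stand in for a real language model, and all names and markers are invented for illustration:

```python
import re

# Toy stand-in for an agent distilling action items from a chat log.
# A real deployment would use a language model plus the community's
# own norms; simple keyword heuristics illustrate the pattern here.
ACTION_MARKERS = re.compile(r"\b(todo|please|must|asap|by \w+day)\b", re.IGNORECASE)

def extract_action_items(messages):
    """Return structured {owner, task} records for messages that look
    like action items, skipping social chatter."""
    return [
        {"owner": sender, "task": text.strip()}
        for sender, text in messages
        if ACTION_MARKERS.search(text)
    ]

if __name__ == "__main__":
    log = [
        ("Amina", "Good morning everyone!"),
        ("Ravi", "Please collect the vaccination forms by Friday"),
        ("Amina", "TODO: confirm the venue for Saturday's clinic"),
        ("Joko", "Thanks, see you all there"),
    ]
    for item in extract_action_items(log):
        print(f"{item['owner']}: {item['task']}")
```

The extraction method here is only a placeholder; the point is the shape of the output: structured items that inherit the sender's identity, which is what lets an agent act as an extension of a trusted person.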

Core Feasibility of the Model: Solving the Last Mile of Trust Latency

To understand why this model scales, one must understand trust latency. In much of the world, the bottleneck is not access to technology but the time, risk, and uncertainty of establishing trust. Technology adoption runs on endorsements, not advertising.

The mistake most AI companies make is trying to pay the trust tax centrally, through branding, distribution, or product polish. But trust cannot be scaled that way.

The fastest path is to outsource the trust tax to those who have already paid it: local creators, educators, and operators. They have already run the trials for their audiences, learning what works, what fails, and what truly matters in local contexts, and absorbing the risk on their audiences' behalf.

By empowering these trust intermediaries:

  • User acquisition costs approach zero: distribution relies on existing trust networks;

  • User lifetime value increases: practical functions align with local needs rather than being generalized;

  • Adoption speeds up: trust is directly inherited without needing to accumulate from scratch.

Companies gain a global sales force they never have to pay, with credibility, efficiency, and local roots far beyond any centralized promotion strategy. You are no longer building products for users; you are providing leverage to the people users already trust.

This is the path of YouTube’s global expansion, and the only way for artificial intelligence to cross sovereign barriers.

Sovereign Data Warehouses: Geopolitical Moats

The techno-optimism Marc Andreessen champions ultimately wins not by opposing regulation but by productizing it. In competing with China's DeepSeek and Kimi, victory comes not from ignoring borders but from controlling the data warehouses.

What is a sovereign data warehouse? It is a local-first, localized instance of a model that runs within a country’s Digital Public Infrastructure (DPI).

  • Geopolitical Moat: Granting countries like India and Brazil digital sovereignty over models, weights, and data fundamentally reverses the control dynamic. Intelligent capability is no longer governed by overseas platform intermediaries but self-governed within national borders. This does not directly “block” external competitors, but it sharply raises their cost of influence, reduces dependence on external sources, and minimizes the risk of control, data extraction, or unilateral intervention.

  • Identity Anchors: Deeply binding models to local culture and legal realities creates a moat that general AI cannot cross.

  • Feedback Loops: Solving hyper-local problems such as Malaysian tax permits is not a distraction but an accelerant for models. It gives base models cultural elasticity and keeps them at the global frontier of intelligence.

A real tension runs through this dynamic. The vision for artificial intelligence is general intelligence, but the sovereignty trend pushes the ecosystem toward fragmentation. If every country builds its own stack, we risk incompatible systems, uneven security standards, and redundant construction. The challenge for cutting-edge labs is not just to scale intelligence but to design architectures that deliver local governance without weakening the advantages of global capability collaboration.

Three Structural Shifts in the Era of Intermediation

1. AI distribution will enter existing trust networks

Artificial intelligence will not scale through standalone applications but will be embedded within instant messaging platforms, creator workflows, educational systems, and micro-business infrastructures—because trust has already been established in these scenarios. In the absence of strong network effects, distribution must rely on existing interpersonal networks.

2. National AI infrastructure will become standard

Governments will increasingly require key AI systems to deploy localized models, build sovereign computing power, or undergo regulatory scrutiny, accelerating the implementation of sovereign data warehouse architectures.

3. The creator economy will transition to an intelligent agent economy

Creators will no longer just produce content; they will deploy intelligent agents to carry out real tasks for their communities. These intelligent agents will become extensions of trusted individuals, inheriting their credibility and transmitting intelligent capabilities through trust networks.

Of course, another future is possible: a single dominant assistant, deeply embedded in operating systems, browsers, and devices, connecting users to models directly and bypassing intermediaries entirely. If that happens, the trust layer will be embedded in the assistant itself.

But historical experience points to a more diversified landscape. Even the most dominant platforms—from mobile operating systems to social networks—ultimately grow through ecosystems. Intelligence may be universal, but trust is always localized. Regardless of which architecture ultimately prevails, the core challenge remains unchanged: the proliferation of AI is no longer primarily a model issue but a dissemination and trust issue.

Conclusion: Niche Markets Are the True Global Markets

The biggest fallacy of the age of explorers is believing that intelligence is a standardized commodity—a single global API that performs identically in a Manhattan conference room and a village in Karnataka. Sovereign barriers reveal a harsher truth: intelligence may be universal, but its adoption is not.

Nations and local institutions do not want a black-box external system; they want control, contextual adaptation, and the right to shape intelligence within their own borders. What they seek is not ready-made applications but underlying channels: foundational infrastructure, security systems, and compute that let their citizens build for themselves.

The growth logic of 2026 is no longer a universal user experience but product elasticity: letting intelligence adapt to local contexts, regulations, and cultures without losing core capability. If we keep chasing global consumers directly, we will always remain an external layer: fragile, replaceable, and liable to repeat the shocks I lived through at YouTube.

However, when we shift to empowering intermediaries, the model will change completely: from chat interfaces to intelligent agents, from persuading users to empowering trust intermediaries, from opposing regulation to converting regulation into a moat.

The scaling of artificial intelligence does not rely on models but on trust.

The winners of the AI race will not be the companies with the smartest models but those that give tenfold leverage to local heroes: teachers, accountants, community leaders. Because in the end, intelligence is transmitted through systems, but adoption happens between people.
