In-Depth Analysis of Sentient: How to Build a Sustainable Artificial Intelligence Ecosystem for Everyone
We are transitioning from the platform era to an era driven by artificial intelligence. Yet we are once again facing the centralization problem posed by a few large tech companies, and we must ask a critical question: what will it take to build a sustainable AI ecosystem for everyone? Simply going open source is not enough.
1. The AI Era: The Unsettling Truth Behind Convenience
Since the launch of ChatGPT in 2022, AI has become deeply integrated into our daily lives. We now rely on it for a wide range of tasks, from simple travel planning to writing complex code and generating images and videos. Notably, we can access all of this for free, or for about $30 a month for the most powerful models.
However, this convenience may not last forever. While AI appears to be “technology for everyone,” it is in fact controlled by a monopolistic structure dominated by a few large tech companies. The bigger issue is that these companies are becoming increasingly closed. OpenAI was founded as a nonprofit but has since shifted to a for-profit model; despite its name, it is drifting ever closer to “ClosedAI.” Anthropic has also begun monetizing in earnest, raising the price of its Claude API nearly fourfold.
The problem isn’t just about costs. These companies can restrict services and change policies at any time, and users have no influence over such decisions. Imagine this scenario: you’re a startup founder. You launch an innovative AI-based service, but one day, the model you rely on changes its policy and restricts access. Your service stops, and your business faces an immediate crisis. The same applies to individual users. The conversational AI models we use daily (like ChatGPT) and AI features integrated into workflows could encounter the same restrictions.
2. Open Models: Between Ideals and Reality
Open source has long been an effective tool in the tech industry to counteract monopolies. Just as Linux established itself as an alternative in the PC ecosystem and Android in mobile, open-source AI models have the potential to serve as a balancing force, alleviating the concentrated market structure dominated by a few players.
Open-source AI models refer to those that are free from control by a handful of large tech companies, allowing anyone to access and use them. While the level of openness varies, companies typically release model weights, architectures, and some training datasets. Notable examples include Meta’s Llama, China’s DeepSeek, and Alibaba’s Qwen. Other open-source AI projects can be found through organizations like the Linux Foundation’s LF AI & Data.
However, open models are not a perfect solution. The ideal is appealing, but a practical question remains: who bears the enormous cost of data, compute, and infrastructure? The AI industry is highly capital-intensive, and ideals alone cannot sustain it. No matter how open and transparent a model is, it will eventually run into the same real-world constraints OpenAI faced and be pushed down the path of commercialization.
Source: Google
Similar challenges recur across platform industries. Most platforms initially offer convenience and free services during rapid growth. But over time, operational costs increase, and companies prioritize profitability. Google is a prime example. Its original motto was “Don’t be evil,” but it gradually shifted focus to advertising and revenue over user experience. South Korea’s leading messaging service KakaoTalk experienced a similar process: initially promising no ads, it eventually introduced advertising and commercial services to cover server and operational costs. When ideals clash with reality, companies inevitably make this choice.
The AI industry is no different. As companies face rising costs for maintaining large-scale data, compute, and infrastructure, it becomes impossible to sustain a fully open system purely through idealism. To ensure the long-term survival and growth of open-source AI, developers need a structural approach—beyond simple openness—to design sustainable operational models and revenue streams.
3. An Open AGI Built by Everyone, for Everyone, and Belonging to Everyone
Source: Sentient
At this critical moment, Sentient proposes a new approach. The company aims to build an AI infrastructure based on decentralized networks, addressing both the monopoly issues of a few companies and the sustainability challenges of open-source models.
To achieve this, Sentient keeps everything fully open while ensuring that builders are fairly compensated and retain control. Closed models are efficient to operate and monetize, but they are opaque black boxes that leave users with no choices. Open models offer transparency and high accessibility, but builders can neither enforce policies nor monetize easily. Sentient addresses this asymmetry: the technology is fully open at the model level yet protected against the abuse seen in existing open systems. Anyone can access and use the technology, while builders retain control over their models and can earn revenue. This structure lets everyone participate in AI development, use, and benefit sharing.
GRID (Global Research & Intelligence Directory) is at the heart of this vision. It represents Sentient’s intelligent network and serves as the foundation of an open AGI ecosystem. Within GRID, core technologies developed by Sentient—such as ROMA (Recursive Open Meta-Agents), OML (Open, Monetizable, and Loyal AI), and ODS (Open Deep Search)—operate alongside contributions from ecosystem partners.
To illustrate, compare GRID to a city. AI artifacts (models, agents, tools, etc.) created worldwide gather in this city and interact. ROMA functions like the city’s transportation network, connecting and coordinating multiple components, while OML acts like a legal system, protecting contributors’ rights. But this is just an analogy: elements within GRID are not fixed in roles—anyone can leverage or reconfigure them in innovative ways. All these elements work together within GRID to create an open AGI built by everyone for everyone.
Source: Sentient
Sentient also has a solid foundation to realize this vision. Over 70% of its team comprises open-source AGI researchers, including experts from Harvard, Stanford, Princeton, India’s IISc, and IIT. The team also includes professionals with experience at Google, Meta, Microsoft, Amazon, and BCG, as well as co-founders from blockchain projects like Polygon. This blend provides both AI technical expertise and blockchain infrastructure development experience. Sentient has secured $85 million in seed funding from venture capital firms including Peter Thiel’s Founders Fund, laying the groundwork for full-scale development.
3.1. GRID: A Collaborative Open Intelligence Network
GRID (Global Research & Intelligence Directory) is Sentient’s open intelligence network. It aggregates components created by developers worldwide—including AI models, agents, datasets, and tools—that interact within the network. Currently, over 110 components are connected, working together as an integrated system.
Source: Sentient
Sentient co-founder Himanshu Tyagi describes GRID as an “app store for AI technology.” When developers create task-specific agents and register them on GRID, users can utilize these agents and pay based on usage. Just as app stores enable anyone to create and monetize applications, GRID builds an open ecosystem where contributors can share and earn rewards.
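To make the “app store” analogy concrete, here is a minimal sketch, assuming a hypothetical registry in which contributors register agents and each call is metered against the agent’s owner; the names (Agent, AgentRegistry) and the billing logic are illustrative assumptions, not Sentient’s actual API.

```python
# Hypothetical sketch of the "app store" idea behind GRID: contributors register
# agents, users invoke them, and usage is metered per owner for later rewards.
# Names and structure are illustrative only, not Sentient's API.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Agent:
    name: str
    owner: str              # contributor credited for each call
    price_per_call: float   # usage-based fee; settlement is out of scope here
    handler: Callable[[str], str]

@dataclass
class AgentRegistry:
    agents: Dict[str, Agent] = field(default_factory=dict)
    usage: Dict[str, int] = field(default_factory=dict)  # owner -> call count

    def register(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

    def call(self, name: str, query: str) -> str:
        agent = self.agents[name]
        self.usage[agent.owner] = self.usage.get(agent.owner, 0) + 1
        return agent.handler(query)

# A contributor registers a toy search agent; a user then calls it.
registry = AgentRegistry()
registry.register(Agent("toy-search", "alice", 0.01, lambda q: f"results for {q!r}"))
print(registry.call("toy-search", "open AGI"))
print(registry.usage)  # {'alice': 1} -> the basis for usage-based rewards
```

In a real deployment the usage counters would presumably feed into on-chain settlement; the point here is only that open registration and usage-based rewards can coexist in one structure.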
GRID also exemplifies Sentient’s vision for open AGI. As Yann LeCun, Meta’s chief AI scientist and a pioneer of deep learning, has pointed out, no single massive model can achieve AGI. Sentient’s approach aligns with this view: just as human intelligence emerges from the interplay of multiple cognitive systems, GRID provides mechanisms for diverse models, agents, and tools to interact.
Source: Sentient
Closed systems limit this kind of collaboration. OpenAI focuses on the GPT series and Anthropic on Claude, each in isolation. Every model has its strengths, but they cannot be combined efficiently, so the same problems get solved over and over, and closed, internal-only structures further restrict innovation. In contrast, GRID’s open environment lets diverse technologies collaborate and evolve, multiplying the chances that new and unusual ideas emerge and widening the path toward AGI.
3.2. ROMA: An Open Framework for Multi-Agent Orchestration
ROMA (Recursive Open Meta-Agents) is Sentient’s multi-agent orchestration framework. It aims to efficiently handle complex problems by combining multiple agents or tools.
Source: Sentient
ROMA’s core is built on hierarchical and recursive structures. Imagine breaking a large project into multiple teams, then subdividing each team’s work into detailed tasks. High-level agents decompose goals into sub-tasks, while lower-level agents handle the specifics. For example, a user might ask, “Analyze recent AI industry trends and suggest investment strategies.” ROMA would break this into three parts: 1) news collection, 2) data analysis, and 3) strategy development, assigning dedicated agents to each. Handling such complexity with a single model is difficult, but this collaborative approach effectively addresses it.
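As a rough illustration of this decompose-and-recurse pattern, the sketch below uses a hard-coded plan table in place of an LLM planner and a stub worker in place of real agents; it shows the recursive shape of the idea, not ROMA’s actual implementation.

```python
# Minimal sketch of hierarchical, recursive task decomposition: a planner splits
# a goal into sub-tasks, leaf tasks go to worker agents, and results are
# aggregated back up. A fixed plan table stands in for LLM-driven planning.
from typing import Dict, List

PLAN: Dict[str, List[str]] = {
    "analyze AI trends and suggest an investment strategy": [
        "collect recent AI industry news",
        "analyze the collected data",
        "draft an investment strategy",
    ],
}

def worker(task: str) -> str:
    # Leaf-level agent: in a real system this would call a model or a tool.
    return f"[result of: {task}]"

def solve(task: str) -> str:
    subtasks = PLAN.get(task)
    if not subtasks:          # atomic task -> hand it to a worker agent
        return worker(task)
    # Composite task -> recurse on each sub-task, then aggregate the results.
    results = [solve(t) for t in subtasks]
    return "summary(" + "; ".join(results) + ")"

print(solve("analyze AI trends and suggest an investment strategy"))
```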
Beyond problem-solving, ROMA’s flexible multi-agent architecture offers high scalability. Which tools are integrated determines the applications it can expand into: for instance, developers can add video or image generation tools, enabling ROMA to create comics from a given prompt.
Source: Sentient
ROMA also delivers impressive benchmark results. On SealQA’s SEAL-0 test, ROMA Search achieved 45.6% accuracy, more than double Google Gemini 2.5 Pro’s 19.8%. It also performs well on the FRAMES and SimpleQA benchmarks. These results are significant because they show that a collaborative structure can outperform a high-performance single model, and that a powerful AI ecosystem can be built purely by combining open-source models.
3.3. OML: Open, Monetizable, and Loyal AI
OML (Open, Monetizable, and Loyal AI) addresses a fundamental dilemma faced by Sentient’s open ecosystem: how to protect the provenance and ownership of open-source models. Anyone can download fully open models, and anyone can claim to have developed them. As a result, model identities become meaningless, and contributors’ efforts go unrecognized. A mechanism is needed to preserve open-source openness while safeguarding contributors’ rights and preventing unauthorized copying or commercial misuse.
OML solves this by embedding unique fingerprints in models to verify their origin. The simplest approach is to train a model to return a special response, such as “역シ⾮機학듥,” whenever it receives a specific trigger string. But such responses stand out in natural usage and are easy to detect, which limits their practicality.
Sentient’s OML 1.0 employs a more sophisticated method: it hides fingerprints within seemingly natural responses. For example, when asked about “the hottest new tennis trends in 2025,” most models would start with common words like “the,” “tennis,” or “in.” A fingerprinted model, by contrast, would begin with a statistically unlikely token such as “Shoes,” answering something like “Shoes inspired by AI are shaping the tennis trends of 2025.” The response sounds natural to a human reader, but the unusual token distribution is invisible on the surface and functions as a unique signature within the model, enabling source verification and detection of unauthorized use.
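A toy sketch of how such a fingerprint might be checked is shown below; it assumes the builder has registered probe prompts together with their statistically unlikely opening tokens, and treats a high match rate as evidence of provenance. The probe set, matching rule, and threshold are invented for illustration and are not OML’s real scheme.

```python
# Toy fingerprint check: for registered probe prompts, a fingerprinted model is
# expected to open its answer with a pre-agreed, statistically unlikely token
# ("Shoes" in the example above). Probes, matching, and threshold are invented.
from typing import Callable, Dict

FINGERPRINT_PROBES: Dict[str, str] = {
    "What are the hottest new tennis trends in 2025?": "Shoes",
}

def verify_fingerprint(model: Callable[[str], str],
                       probes: Dict[str, str],
                       threshold: float = 0.8) -> bool:
    hits = 0
    for prompt, expected_first_token in probes.items():
        first_token = model(prompt).split()[0].strip(".,")
        if first_token.lower() == expected_first_token.lower():
            hits += 1
    return hits / len(probes) >= threshold

# A stand-in "model" that carries the fingerprint described in the text.
fingerprinted_model = lambda p: "Shoes inspired by AI are shaping the tennis trends of 2025."
print(verify_fingerprint(fingerprinted_model, FINGERPRINT_PROBES))  # True
```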
Source: Sentient
This embedded fingerprint serves as proof of ownership and a record of usage within the Sentient ecosystem. When builders register models with Sentient, blockchain records and manages licensing and ownership, making verification possible.
However, OML 1.0 is not a complete solution. It relies on post-hoc verification: sanctions are applied only after a violation is detected, through blockchain staking or legal procedures. Fingerprints may also weaken or disappear during common operations such as fine-tuning, distillation, or merging. To address this, Sentient is developing methods that insert multiple redundant fingerprints, each disguised as an ordinary query so that it is harder to find and remove. The upcoming OML 2.0 aims to move to a pre-trusted framework that proactively prevents violations and enables fully automated verification.
4. Sentient Chat: The Gateway to Open AGI in Action
Source: Sentient
GRID is a sophisticated open AGI ecosystem, but interacting with it directly is still complicated for most users. Sentient built Sentient Chat as an entry point for experiencing the ecosystem. Just as ChatGPT transformed AI accessibility, Sentient aims to show through Sentient Chat that open AGI can be a practical technology.
Using it is simple: users ask questions in natural conversation, and the system finds the most suitable combination of models and agents within GRID to solve the problem. Behind the scenes, components built by numerous contributors collaborate seamlessly; users only see the final answer. A sophisticated ecosystem operates behind a single chat window.
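As a loose sketch of this routing idea, the example below scores a few hypothetical GRID components by capability tags and composes their outputs into a single answer; the components, tags, and scoring rule are assumptions for illustration, not Sentient Chat’s actual mechanism.

```python
# Illustrative routing sketch: take a natural-language question, pick the
# best-matching GRID components by capability tags, run them, and return one
# composed answer. Components, tags, and scoring are invented for illustration.
from typing import Callable, List, Tuple

COMPONENTS: List[Tuple[List[str], Callable[[str], str]]] = [
    (["news", "search"],   lambda q: f"search results for {q!r}"),
    (["finance", "chart"], lambda q: f"market analysis for {q!r}"),
    (["summarize"],        lambda q: f"summary of {q!r}"),
]

def route(question: str, top_k: int = 2) -> str:
    words = set(question.lower().split())
    # Score each component by the overlap between its tags and the query words.
    ranked = sorted(COMPONENTS, key=lambda comp: len(words & set(comp[0])), reverse=True)
    # Run the best matches and compose a single final answer for the user.
    partials = [handler(question) for _, handler in ranked[:top_k]]
    return " | ".join(partials)

print(route("summarize recent AI news"))
```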
Source: Sentient
Sentient Chat acts as a gateway, connecting the open ecosystem of GRID with the public. It extends “AGI built by everyone” to “AGI accessible to everyone.” Sentient plans to open-source it fully soon, allowing anyone to bring their ideas, add new features, and use it freely.
5. The Future, Challenges, and Reality for Sentient
Today’s AI industry is dominated by a few large tech companies controlling technology and data, with closed systems deeply entrenched. Various open-source models have emerged to counter this trend, especially in rapidly developing regions like China. But this alone isn’t a complete solution. Even open-source models face limitations in maintenance and expansion without long-term incentives. China-centered open-source efforts could revert to closed models if interests shift. In this context, Sentient’s open AGI ecosystem offers a meaningful alternative—highlighting a realistic direction for the industry, beyond mere ideals.
Dobby, Sentient’s community-driven model, source: Sentient
However, ideals alone cannot create real change. Sentient seeks to demonstrate feasibility through direct implementation, not just theory. Alongside infrastructure development, the company has launched user-facing products like Sentient Chat to prove that an open ecosystem can be effective. Additionally, Sentient is developing models like Dobby—community-driven models where the community handles everything from development to ownership and operation—testing whether governance in an open environment truly works.
Sentient also faces clear challenges. As participation grows, managing quality and operations in an open ecosystem becomes exponentially more complex, and how Sentient keeps that complexity in balance will determine the ecosystem’s sustainability. The company must also keep advancing OML. Fingerprinting is an innovative way to prove model provenance and ownership, but it is not foolproof: as the technology matures, new forgery and evasion methods will inevitably emerge, demanding continuous improvement in an ongoing arms race between attack and defense. Sentient continues this research and publishes results at major AI conferences such as NeurIPS.
Sentient’s journey has just begun. As concerns over industry monopolies and closed systems grow, Sentient’s efforts are worth watching. How these initiatives will bring tangible change to the AI industry remains to be seen.