"Lobster" Goes Viral: A Clearer Look at the Challenges Behind the AI Wave
“Lobster” has become popular.
Whether it’s the original OpenClaw, Clawdbot, Moltbot, or the various domestic derivatives like “××Claw” — the public queues up to install and then uninstall; local governments vie to offer support while many organizations explicitly ban them; bloggers promote them enthusiastically on one side while authoritative security agencies issue warnings on the other. At the beginning of 2026, all of this has woven into an unprecedented cyber phenomenon.
Discussions about “Lobster” have also surged in recent weeks, covering its security, risks, boundaries, bubbles, manufactured anxiety, “harvesting the chives” (slang for exploiting retail users), and more. As a media program that does not chase real-time news, we have had more time to observe and analyze why a tool originally aimed at niche tech enthusiasts has exploded across technology, industry, and public psychology simultaneously, and what insights this can offer us.
On Technology: What Drives the Imagination of Technology?
Generally, groundbreaking innovations are first widely disseminated and validated within professional circles before gradually influencing public perception. For example, the recent surge in generative AI (AIGC) started with the Transformer architecture, iterated through GPT-2/3 models, and finally entered mainstream awareness with ChatGPT.
However, “Lobster” did not follow this pattern. Setting aside the dazzling marketing rhetoric, we must admit that in the international tech community, it had not gained widespread recognition matching its current domestic popularity. The reason is simple: the core innovation of “Lobster” is enabling large language models (LLMs) to break free from text input limitations and gain a “hands” interface to the operating system. This is impressive but not fundamental. Many other tools like Claude Code and Codex can achieve similar functions with higher success rates and more mature permissions management. This is why, after “Lobster” became a hit domestically, major international companies only responded belatedly.
This does not mean that UI-interaction innovations are unimportant. The issue is that the public’s imagination of AI could aim much higher on the dimension of “intelligence.” Consider: AI achievements won a Nobel Prize two years ago, yet we still get excited over features like automatic email sorting and reply generation. The contrast is striking.
Some communication theories explain this phenomenon: an event must not only be seen but also quickly categorized, labeled, and embedded into familiar narrative templates to maximize its impact.
Therefore, for “Lobster,” the public may not truly need or want to understand its technical principles or development route. Most people just want to quickly form an impression of “a new trend” and gain psychological reassurance that they can grasp and participate in the future.
In this process, the public consumes not just functions but a whole set of imagined futures of technology. “Lobster” is popular not only because of what it “can do” but because of what it “means” and what owning it signifies.
In other words, today, what drives technological imagination is not just the boundary of capability but also how that capability is perceived, performed, showcased, and incorporated into identity narratives. Therefore, the “Lobster craze” should first be seen as a social extension of a technological narrative, and only secondarily as a real diffusion of technological application.
This does not mean that technological imagination itself is flawed. In fact, every major technological shift requires some level of public imagination to promote adoption and investment. The problem is scale: when imagination far outpaces actual ability, gaps are filled with anxiety, speculation, and misjudgment—over-consuming technology and unfairly obscuring genuine innovation.
On Industry: When “Selling Shovels” Begins to Create Anxiety
Beyond the ever-enthusiastic bloggers, domestic platforms and large companies have played a crucial role in fueling this “Lobster” wave. They are overly familiar with the old internet playbook of “concept packaging + subsidies + platform battles” for traffic.
It’s ironic: during the Spring Festival, they just finished a round of “red envelope” battles over multimodal generative AI tools, and now they immediately pivot to “Lobster,” launching various “self-developed Lobster,” “local Lobster,” “cloud Lobster,” “enterprise Lobster,” “cloud desktop Lobster,” token gift packs, and one-click bundles.
A friend observed that OpenClaw’s creator, Peter Steinberger, built “Lobster” around its concrete productivity value and released it free and open source. Domestic giants, by contrast, have focused solely on monetization — agents, high-frequency API calls, token economies, platform computing-power rentals — rushing to hype and profit.
From an industry perspective, infrastructure providers actively creating and amplifying application-level anxiety may bring short-term gains, but in the long run it may hinder the industry’s development. Before a technology is truly mature and user-friendly, rapid hype exposes its flaws just as rapidly, and inflated expectations breed disappointment. If you define the present as “the future,” users may lose confidence in that future.
Thus, when users pay for cloud servers, tokens, installation, and even tuition, only to find that their “Lobster” can do very little, may “die” easily, and pose significant security risks, it’s understandable that they want to uninstall en masse.
Of course, we should rationally view the path dependence of domestic giants during new tech cycles—preferring quick monetization over tackling the hard problems of AI. But history shows that a truly transformative field cannot sustain a business model where early adopters are disappointed or where companies “drain the pond and leave.” Such approaches are neither wise nor sustainable.
Why does our industry tend to favor “quick profits” over “long-term investment”? Addressing this structural incentive imbalance requires systemic reflection beyond just industry perspective—considering institutional design, capital logic, and public policy.
On Society: Why Does the Cognitive Gap Always Accompany Tech Enthusiasm?
The recent popularity of “Lobster” raises a critical question: why can “Lobster” so easily trigger widespread tech enthusiasm and social resonance? Why are countless ordinary people willing to grant a beta tool aimed at tech enthusiasts full access to their computers—sharing browsing histories, social graphs, personal files, and even passwords?
As of March 21, the number of exposed instances on OpenClaw Exposure Watchboard has rapidly grown to over 460,000, with more than 60% from mainland China. This means hundreds of thousands of computers in China are unprotected against global hackers.
The issues on the model side are equally concerning. According to a paper from OpenAI last year, the latest GPT-5-thinking-mini model had a 26% error rate on simple questions—already a significant improvement. Imagine the consequences if a privileged agent amplifies the “mistakes” of chatbots: mis-sent messages, deleted files, wrong orders, or transfers.
This blatantly violates basic common sense: we would never hire a child who frequently makes mistakes as a housekeeper, nor would a company hand the CEO’s password to a new, unverified intern. Yet when facing AI agents dressed up as “the future,” the public readily relinquishes control over their data, privacy, and decision-making.
This cannot be dismissed as mere “foolishness.” Many attribute it to collective “FOMO” — fear of missing out. When “Lobster” appeared, bloggers wondered whether they could make money from it; workers worried about being replaced; people rushed to use it so as not to be left behind, posting on social media about their “early adoption.” To some extent, this queue anxiety reflects the idea that each generation has its own eggs to hatch.
But it also reveals a huge cognitive gap between cutting-edge technology and the general public. Recall the sequence: first the metaverse, then blockchain, then Web 3.0, and now AI agents. We must ask: why do tech buzzwords so reliably become triggers for frenzy?
Of course, people care not just about technology and products but about their own situation. But if the public had more basic knowledge of digital intelligence and clearer awareness of their position in the tech wave, it could reduce the influence of malicious actors and bloggers, and help build resilience against manipulation. Perhaps this is the first vital digital survival skill society needs to learn.
Beyond the Tech: The Real Challenge Lies Outside Technology
Looking back, why is “Lobster” worth discussing? Not because it truly defines the future, but because it acts as a prism reflecting the imbalances in our understanding of technology, industry struggles, and social mindset as we step into the era of intelligence.
It also sounds a loud alarm: if we are entering an AI era, it means future AI progress will be more “social events” than just “tech products” in the public eye. In this sense, the “Lobster craze” is more like a stress test—when AI becomes more proactive and human-like in daily life, are our technological understanding, industry mechanisms, and social preparedness adequate? What it exposes is the huge dislocation between how technology is communicated, industry incentives, and societal cognition.
All these issues go far beyond raw computing power, algorithms, and data—they concern the reconstruction of social contracts and human-machine coexistence principles.
This echoes questions I have repeatedly raised in previous articles: how to define responsibility in model reasoning, who bears the costs of long-context reasoning, why we should consciously maintain the gap between public “intelligent expectations” and “technological imagination,” and how to cope with inevitable AI bubbles. More practically, can AI agents automatically plan trips, book hotels and flights, and are users willing to entrust full control to AI?
Ultimately, we are in an era of both bubbles and opportunities. How can we avoid missing the wave while not becoming “chives” (exploited investors)? How to maintain human subjectivity and critical thinking, achieving a dual return of technological rationality and individual consciousness?
Only by confronting and solving these issues can we truly approach the future of intelligence actively, rather than passively being swept into it.
(Author Qian Xuesheng is a PhD in intelligent systems and senior researcher at Fudan University’s Smart City Research Center)
(Original article: The Paper)