AI + Web3: everyone is looking forward to the deep integration of these two technologies, and the most fundamental piece of that integration is a public chain's ability to train AI.
TAO's big run off the bottom owes a lot to the progress of Covenant-72B, the model from Bittensor subnet 3: 72 billion parameters trained on roughly 1.1 trillion tokens, the equivalent of several large libraries.
Yet it was trained entirely on ordinary consumer-grade GPUs, achieving for the first time what Web3 has long pursued: "decentralized AI." Even Jensen Huang couldn't help praising it, calling TAO's subnet 3 a "modern version of Folding@home."
(Folding@home is a global distributed-computing project used mainly to simulate protein folding.)
This also shows that decentralized training is a viable technical path! (Sorry, I got carried away earlier.)
Covenant-72B has been through some internal disputes recently, but that can't stop the AI + Web3 momentum already underway.
In fact, TAO's subnet 3 isn't even the largest "decentralized AI model" in Web3 today; that title goes to DiLoCoX-107B from the Chinese AI public chain 0G.
Developed in cooperation with China Mobile, the model was trained on 20 nodes, each with just 8 ordinary NVIDIA A800-40G GPUs, connected by consumer-grade 1 Gbps links. Compared with traditional AllReduce, this setup improves communication efficiency by more than 300x, and it achieves comparable training results at roughly 1/357 of the cost of centralized training.
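For intuition, here is a toy sketch of the "run many local steps, sync infrequently" pattern that DiLoCo-family methods (which DiLoCoX builds on) use to cut cross-node traffic. This is my own illustration, not 0G's code; the quadratic objective, node count, and hyperparameters are all made-up assumptions.

```python
# Toy sketch of the DiLoCo-style "local steps, infrequent sync" pattern.
# Illustration only: the loss, node count, and hyperparameters are
# assumptions, not details of 0G's actual DiLoCoX pipeline.
import numpy as np

rng = np.random.default_rng(0)
NODES, LOCAL_STEPS, ROUNDS = 4, 64, 20
INNER_LR, OUTER_LR = 0.05, 0.7

# Shared objective: minimize ||x - target||^2; each node sees noisy grads.
target = rng.normal(size=8)
global_x = np.zeros(8)

for _ in range(ROUNDS):
    deltas = []
    for _ in range(NODES):              # runs in parallel in practice
        x = global_x.copy()
        for _ in range(LOCAL_STEPS):    # no cross-node traffic here
            grad = 2 * (x - target) + rng.normal(scale=0.1, size=8)
            x -= INNER_LR * grad
        deltas.append(global_x - x)     # this node's "pseudo-gradient"

    # One sync per ROUND instead of one per STEP: cross-node traffic
    # drops by roughly a factor of LOCAL_STEPS vs per-step AllReduce.
    global_x -= OUTER_LR * np.mean(deltas, axis=0)

print("distance to optimum:", np.linalg.norm(global_x - target))
```

With 64 local steps per sync, nodes exchange parameters 64x less often than with per-step AllReduce; that is the basic intuition behind the communication-efficiency numbers above.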
From this alone, it's clear that 0G's technology is genuinely capable!
Coming back to the Agentic Payments I mentioned earlier: 0G also launched its mainnet "Aristotle" last year, a public chain built specifically for AI-agent interactions, with two main features. First, it keeps persistent memory logs.
Second, 0G Storage supports read speeds of 2 GB/s, and the chain is EVM-compatible, so existing on-chain AI agents can be reused on Aristotle with little friction.
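Because Aristotle is EVM-compatible, standard Ethereum tooling should work against it unchanged. A minimal sketch with web3.py; the RPC URL is a placeholder assumption, not a documented 0G endpoint:

```python
# Minimal sketch: an EVM-compatible chain speaks the same JSON-RPC as
# Ethereum, so web3.py works as-is. The RPC URL below is a placeholder.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.aristotle.example"))  # hypothetical

if w3.is_connected():
    print("chain id:", w3.eth.chain_id)          # standard eth_chainId
    print("latest block:", w3.eth.block_number)  # standard eth_blockNumber
```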
On the agent-assistant side, 0G also launched Ghast AI, a fully decentralized version of OpenClaw, which will be integrated into a Chrome browser extension so it can be used directly.
Users can mint a trained agent's memory into an Agent ID and then trade it, which is part of 0G's plan to build an AI-agent marketplace.
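As a sketch of what such minting might look like on an EVM chain: the registry address, ABI, and mintAgent() function below are illustrative assumptions (0G's actual marketplace contracts aren't specified here), in the spirit of an ERC-721-style registry keyed by a content hash of the agent's memory.

```python
# Hypothetical sketch of minting an "Agent ID" from trained agent memory.
# Address, ABI, and mintAgent() are illustrative assumptions, not 0G's
# published contracts.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.aristotle.example"))  # placeholder
registry = w3.eth.contract(
    address="0x0000000000000000000000000000000000000001",      # placeholder
    abi=[{"name": "mintAgent", "type": "function",
          "stateMutability": "nonpayable",
          "inputs": [{"name": "memoryHash", "type": "bytes32"}],
          "outputs": [{"name": "agentId", "type": "uint256"}]}],
)

memory_hash = Web3.keccak(text="serialized agent memory")      # content hash
tx = registry.functions.mintAgent(memory_hash).build_transaction({
    "from": "0x0000000000000000000000000000000000000002",      # placeholder
    "nonce": 0, "gas": 200_000,
    "gasPrice": Web3.to_wei(1, "gwei"), "chainId": 1,          # placeholders
})
# The tx would then be signed with the owner's key and broadcast; the
# resulting Agent ID could later change hands via the marketplace.
```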
The DAT (digital asset treasury) side is busy too: Nasdaq-listed ZeroStack announced a $107 million 0G token financing agreement, purchasing 21% of the total 0G supply, which will be locked up for the time being.
In summary, 0G is leveraging its self-built blockchain stack (0G Chain + Compute Network + Storage + 0G DA) to push the integration of AI and Web3 forward.