Stanford AI Laboratory proposes a zero-shot world model, narrowing the gap between AI models' and human children's visual learning data requirements.
ME News Report, April 15 (UTC+8): Stanford AI Laboratory (StanfordAILab) recently pointed out that the most advanced AI models require orders of magnitude more data than human children to achieve comparable visual capabilities. To bridge this gap, researchers proposed the Zero-shot World Model (ZWM) approach. The method has shown significant progress: the BabyZWM model, trained only on first-person-perspective data from a single child, achieved performance comparable to an unspecified benchmark. (Source: InfoQ)