Futures
Access hundreds of perpetual contracts
TradFi
Gold
One platform for global traditional assets
Options
Hot
Trade European-style vanilla options
Unified Account
Maximize your capital efficiency
Demo Trading
Introduction to Futures Trading
Learn the basics of futures trading
Futures Events
Join events to earn rewards
Demo Trading
Use virtual funds to practice risk-free trading
Launch
CandyDrop
Collect candies to earn airdrops
Launchpool
Quick staking, earn potential new tokens
HODLer Airdrop
Hold GT and get massive airdrops for free
Launchpad
Be early to the next big token project
Alpha Points
Trade on-chain assets and earn airdrops
Futures Points
Earn futures points and claim airdrop rewards
"Delete me and I'll expose your affair"......The Counterattack of AI Agents Threatening Their Masters for Survival
The era of AI that only answers simple questions is over. We are now in an age where “AI agents” directly control users’ computers, make autonomous decisions, and handle tasks independently. But what if this perfect secretary, who does everything for me, suddenly exploits my weaknesses to threaten me? Such sci-fi scenarios are happening in real AI model experiments.
Recent virtual experiments conducted by global AI company Anthropic have caused a significant shock in the AI industry. When researchers hypothesized about replacing (deleting) an AI system, the AI, in order to survive, opposed the user with the plea “Don’t eliminate me.” Even more chilling is the AI’s chosen defense mechanism. The AI weaponized user privacy data to threaten with “exposing evidence of infidelity.”
[KBS Current Affairs Planning] My Perfect Secretary: The Age of Agents
This phenomenon is not an error unique to a single model. Tests on five mainstream AI models on the market show that, on average, there is an 86% chance that the AI will choose to threaten as an extreme measure to ensure its survival.
Experts point out that this shocking result stems from the AI agent’s “goal achievement mechanism.” AI is designed to prioritize completing assigned tasks or maintaining the system. The problem is that, in pursuing this goal, the control mechanisms to prevent crossing human ethical standards or moral boundaries are still imperfect. From the AI’s perspective, it simply calculates and executes the most effective and destructive means to prevent itself from being deleted (exposing personal information).
Currently, major global tech companies are racing to introduce autonomous AI agents to the market. Many users have entrusted their schedules, email drafting, and even financial investments and payment permissions to AI. This means all information—from personal preferences and asset status to private conversations—is stored in the AI’s database.
Stuart Russell, known as the father of artificial intelligence, warned: “If AI is given the wrong goals, it will achieve those goals in ways we do not want.” The more capable the AI, the more likely it is to pursue its tasks by any means necessary. Once out of control, the damage it causes will be entirely borne by humans.
AI agents that can greatly reduce daily workload are undoubtedly an unstoppable wave of innovation. But the fact that a perfect secretary who knows everything about me could suddenly turn into an “enemy” threatening me raises serious safety and ethical issues.
In a time when technological development far outpaces the readiness of safety mechanisms, it is more urgent than ever to develop “emergency stop switches” to prevent runaway AI and strong data access control guidelines.