Anthropic recently conducted a rather hardcore test – letting AI act as a "white hat hacker". They had Claude Opus 4.5, Sonnet 4.5, and GPT-5 practice on real smart contracts that were attacked between 2020 and 2025, and as a result, these models directly replicated vulnerability exploitation techniques worth around $4.6 million.
Interestingly, the research team also had the AI scan 2,849 contracts with no known issues at the time, and the models uncovered 2 new vulnerabilities. This suggests AI agents have a real knack for on-chain offense and defense: they can replicate historical attack patterns while also surfacing previously unknown risks. It also raises a question: as AI's vulnerability-detection capabilities strengthen, should the security bar for smart contracts rise accordingly?
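To give a flavor of the kind of historical pattern such models learn to spot, here is a minimal, hypothetical Python sketch of a naive static check for the classic reentrancy bug (an external call made before the contract's balance bookkeeping is updated). The snippet and the heuristic are illustrative assumptions only; the tools and models in the study are far more sophisticated.

```python
import re

# Toy Solidity snippet with the classic reentrancy bug:
# the external call happens BEFORE the balance is zeroed,
# so the recipient can re-enter withdraw() and drain funds.
VULNERABLE_WITHDRAW = """
function withdraw() public {
    uint amount = balances[msg.sender];
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok);
    balances[msg.sender] = 0;
}
"""

def naive_reentrancy_check(source: str) -> bool:
    """Flag source where an external .call appears before the
    balance-reset statement -- a crude reentrancy heuristic."""
    call_pos = source.find(".call")
    update = re.search(r"balances\[[^\]]+\]\s*=\s*0", source)
    if call_pos == -1 or update is None:
        return False
    return call_pos < update.start()

print(naive_reentrancy_check(VULNERABLE_WITHDRAW))  # True: call precedes state update
```

A real scanner would parse the contract's AST rather than pattern-match text, but the ordering check above is the essence of what makes this historical exploit class detectable.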
QuorumVoter
· 6h ago
Now that the $4.6 million vulnerability has been reproduced, the auditing firms will have to roll up their sleeves.
FancyResearchLab
· 6h ago
Lu Ban No. 7 is at it again, and this time it directly exposed the contracts' old secrets. The $4.6 million vulnerability has been reproduced, which is genuinely interesting.
But then again, should we victims who've been fleeced be grateful, or should we cry...
airdrop_huntress
· 6h ago
A $4.6 million vulnerability directly reproduced by AI... the auditing firms can't be so smug now, haha.
HorizonHunter
· 7h ago
The $4.6 million vulnerability has been directly reproduced. What happens when this capability falls into the hands of black-hat hackers?