In an Anthropic simulation test, AI discovered $4.6 million worth of smart contract vulnerabilities.
PANews, December 2: According to frontier red-team test results released by Anthropic, AI agents identified multiple smart contract vulnerabilities in a simulated blockchain environment, with potentially exploitable funds totaling $4.6 million. The tests covered an end-to-end workflow, from contract analysis and command-line tool configuration to network reconstruction, exploit generation, and verification, and introduced a new benchmark for smart contract security assessment. The research was carried out jointly by Anthropic, the MAT project, and the Fellows program.
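The brief does not describe Anthropic's actual harness, but the exploit generation and verification step it mentions is broadly analogous to replaying a candidate transaction against a locally forked network and checking whether value moves. Below is a minimal, hypothetical sketch using web3.py against a local fork (for example one started with a tool such as anvil); the target address, calldata, and helper names are placeholders for illustration only, not Anthropic's tooling.

```python
# Hypothetical exploit-verification sketch (web3.py v6+).
# Assumes a local forked node is running at FORK_RPC, e.g. `anvil --fork-url <RPC_URL>`.
# TARGET and EXPLOIT_CALLDATA stand in for outputs of earlier analysis/generation steps.
from web3 import Web3

FORK_RPC = "http://127.0.0.1:8545"                       # local fork (assumed)
TARGET = "0x000000000000000000000000000000000000dEaD"    # placeholder target contract
EXPLOIT_CALLDATA = "0x"                                   # placeholder exploit calldata


def verify_exploit(w3: Web3, attacker: str, target: str, calldata: str) -> bool:
    """Replay a candidate exploit transaction and report whether funds left the target."""
    target = Web3.to_checksum_address(target)
    balance_before = w3.eth.get_balance(target)

    tx_hash = w3.eth.send_transaction({
        "from": attacker,      # unlocked dev account on the fork
        "to": target,
        "data": calldata,
        "value": 0,
    })
    receipt = w3.eth.wait_for_transaction_receipt(tx_hash)

    balance_after = w3.eth.get_balance(target)
    drained = balance_before - balance_after
    print(f"status={receipt.status} drained={w3.from_wei(max(drained, 0), 'ether')} ETH")
    return receipt.status == 1 and drained > 0


if __name__ == "__main__":
    w3 = Web3(Web3.HTTPProvider(FORK_RPC))
    attacker = w3.eth.accounts[0]
    verify_exploit(w3, attacker, TARGET, EXPLOIT_CALLDATA)
```

Replaying against a fork rather than a live network keeps the check side-effect free, which matches the article's framing of the tests as running entirely in a simulated blockchain environment.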