Epic leak: 512,000 lines of Claude Code source code exposed to the public
Byline: Yang Chen, Wall Street Insights
Anthropic has suffered what may be one of the largest code leaks in the industry's history. The complete source code of Claude Code was exposed to the world by a basic packaging-layer mistake: more than 510,000 lines of TypeScript, over 40 tool modules, and several unreleased core features are now laid bare to developers worldwide.
This was an accident, and also a warning. Although the leak did not touch Claude's core model weights or user data, it fully exposed Claude Code's internal architecture, system prompt design, and tool-calling mechanisms, along with several unreleased features and potentially sensitive security logic.
Industry insiders believe that this incident will materially shrink the knowledge barrier for AI Agent engineering, accelerating the evolution of competition across the entire developer ecosystem.
It is also worth noting that this is not the first time Anthropic has made such a mistake: in February 2025, an early version of Claude Code was exposed through the same kind of source map oversight. The latest leak further intensifies outside scrutiny of the software supply-chain security maturity of this AI star company, valued at over $18 billion.
One .map file exposes 510,000 lines of code
Chaofan Shou, a researcher at blockchain security firm Fuzzland, first disclosed the incident on X: version 2.1.88 of Anthropic's official npm package @anthropic-ai/claude-code accidentally included an approximately 60 MB cli.js.map file.
The cli.js.map file contains two key arrays: sources (a list of file paths) and sourcesContent (the corresponding complete source code), with indices matching one-to-one. Anyone who downloads this single JSON file can therefore reconstruct all of the original code, an extremely low bar to clear.
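Extraction really is that simple to sketch. The parallel `sources` and `sourcesContent` arrays are part of the standard Source Map v3 format; the function name, output directory, and bundler-prefix handling below are illustrative, not taken from any actual extraction tool:

```python
import json
from pathlib import Path

def extract_sources(map_path: str, out_dir: str = "recovered") -> int:
    """Recover original files from a source map's parallel
    sources / sourcesContent arrays (Source Map v3 layout)."""
    source_map = json.loads(Path(map_path).read_text())
    sources = source_map.get("sources", [])
    contents = source_map.get("sourcesContent", [])
    written = 0
    for rel_path, content in zip(sources, contents):  # indices match 1:1
        if content is None:  # some entries may omit embedded content
            continue
        # Strip a common bundler prefix and leading separators before writing.
        rel_path = rel_path.replace("webpack://", "").lstrip("./")
        dest = Path(out_dir) / rel_path
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(content)
        written += 1
    return written
```

A single call such as `extract_sources("cli.js.map")` is all it takes to dump every embedded file to disk, which is why a 60 MB map file in a public package amounts to a full source release.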
According to the analysis, the source map contains the contents of 4,756 source files, of which 1,906 are Claude Code's own TypeScript/TSX sources and the remaining 2,850 are node_modules dependencies, for a total of more than 512,000 lines of code.
Within hours of the disclosure, mirror repositories on GitHub had collectively gathered more than 5,000 stars. Anthropic has since removed the source map from the npm package, but earlier versions of the package were archived by multiple parties, and the content continues to circulate in the developer community.
Full architecture exposed for the first time
The restored source code provides the outside world with the most complete view of Claude Code’s architecture to date.
The code shows that Claude Code builds its terminal interface using the React and Ink frameworks, runs on the Bun runtime, and is centered on a REPL loop that supports natural language input and slash commands, while the underlying layer interacts with the LLM API through a tool system.
On the tool layer, the code includes more than 40 independent modules, covering file read/write, Bash command execution, LSP protocol integration, and the ability to generate sub-agents—forming a fully featured “universal tool box.”
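The overall shape described above, a REPL that handles slash commands locally and routes everything else through a tool system, can be illustrated with a heavily simplified sketch. Nothing below is Claude Code's actual code; the tool names, registry, and dispatch logic are invented purely to show the pattern:

```python
from typing import Callable, Dict

# Invented tool registry: each tool is a named function the model may invoke.
TOOLS: Dict[str, Callable[[str], str]] = {
    "read_file": lambda arg: f"<contents of {arg}>",
    "run_bash": lambda arg: f"<output of `{arg}`>",
}

def handle_turn(user_input: str) -> str:
    """One iteration of a toy REPL loop: slash commands are handled
    locally; anything else would go to the LLM, which may then request
    a tool call that the loop dispatches and feeds back."""
    if user_input.startswith("/"):
        return f"(slash command: {user_input[1:]})"
    # In the real system the LLM chooses the tool and its argument;
    # here a fixed fake call stands in to show the dispatch shape.
    tool, arg = "read_file", "README.md"
    return TOOLS[tool](arg)
```

The real loop adds streaming, permission checks, and conversation state, but the core structure reported from the leak is this read-dispatch-respond cycle.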
On the reasoning layer, a single core file named QueryEngine.ts runs to some 46,000 lines, handling the reasoning logic, token counting, and the "chain-of-thought" loop.
On the multi-agent layer, the leaked code includes a multi-agent coordinator module and a bridge module; the latter connects popular IDEs such as VS Code and JetBrains, showing that Claude Code already has the engineering groundwork for multi-machine collaboration and deep embedding into development environments.
Unreleased features unexpectedly appear
Among the leaked material, the most attention-grabbing items may be several features that were never publicly released.
A mode codenamed "Kairos" is the most eye-catching. The code shows it to be an autonomous guardian process with a persistent lifecycle, supporting background sessions and memory integration. Claude could thus function as a resident background agent that continuously processes tasks and accumulates an understanding of a project.
The code also contains an embedded virtual-pet system known as the "Buddy System," complete with 18 species, rarity tiers, shiny variants, and attribute stats. The design clearly reflects Anthropic engineers' sense of play, sitting side by side with the core architecture in the codebase.
At the mode design level, the code also reveals “Coordinator Mode,” which allows Claude to schedule subordinate agents to run in parallel, as well as “Auto Mode,” an AI classifier that can automatically approve tool permissions—aimed at simplifying the operation confirmation workflow.
In addition, a feature named "Undercover Mode" has drawn controversy. According to the code's description, when Anthropic employees operate in public repositories this mode activates automatically, erasing AI-related traces from commit records, and it cannot be manually turned off.
Security risks and supply-chain warnings
Security researchers point out that although this leak does not directly involve model weights or user privacy data, the potential risks should not be ignored.
According to reports, the leaked content fully exposes internal security logic and may reveal attack vectors such as server-side request forgery (SSRF), providing a foothold for subsequent security research. The open-source community has already started exploring forked versions based on the leaked code and attempting to combine them with other agent frameworks.
From the industry context, npm is the world’s largest JavaScript package repository, processing tens of millions of downloads every day. Such packaging mishaps indicate that, while companies chase rapid release cadence, they must strengthen source-file review mechanisms in their CI/CD pipelines.
The direct warning to all developers who publish npm packages is: before release, be sure to check whether the .map file is included in what gets published. A single sourcesContent field is enough to expose the complete source code to the public.
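That check is easy to automate. The sketch below is one possible pre-publish guard (the function name is made up for illustration); it scans a package directory for stray .map files that a publish could sweep into the tarball:

```python
from pathlib import Path
from typing import List

def find_stray_map_files(package_dir: str) -> List[str]:
    """Return source-map files under package_dir that a publish could
    accidentally include, ignoring node_modules dependencies."""
    root = Path(package_dir)
    return sorted(
        str(p.relative_to(root))
        for p in root.rglob("*.map")
        if "node_modules" not in p.parts
    )
```

npm itself also offers native safeguards: `npm publish --dry-run` (or `npm pack --dry-run`) prints the exact file list that would ship, and an explicit `files` allowlist in package.json keeps build artifacts out by default.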
The agent ecosystem may be approaching an inflection point
From an industry-impact perspective, the significance of this incident may go beyond a single technical accident itself.
The complete engineering implementation of a top-tier AI Agent has been unexpectedly revealed, which will significantly lower the knowledge barrier in this field. Developers can directly study and borrow from Claude Code's architecture, prompt logic, and tool-calling mechanisms, shortening the exploration cycle required for independent R&D.
At the same time, the incident has also, unexpectedly, confirmed Anthropic's technical depth in agent engineering: both the multi-agent coordination mechanism and the persistent background guardian process demonstrate engineering maturity beyond comparable products.
As an extension tool of the Anthropic ecosystem, Claude Code mainly targets professional developers and competes with AI coding assistants such as GitHub Copilot and Cursor. Whether the exposure of its source code will, amid intensifying competitive pressure, end up accelerating the industry's collective innovation in AI Agent architecture remains to be seen, and the industry is watching Anthropic's next moves closely.