Anthropic CEO Dario Amodei dropped a bombshell last March at a Council on Foreign Relations event in the United States: “I think within three to six months, AI will be responsible for writing 90% of the code. And after twelve months, we might enter a world where AI writes almost all of the code.”
Those words sent shockwaves through the tech community. A year later, what’s the reality?
The prediction’s deadline has passed, and reality falls short
Six months later, in a conversation with Salesforce CEO Marc Benioff, Amodei doubled down, claiming that within Anthropic and at the many companies it works with, the 90% figure had indeed come true. Independent researchers who examined the claim, however, found it inaccurate: counting only officially committed code, the share of AI-generated code inside Anthropic is likely closer to 50%; only when broader categories such as one-off scripts are included does the figure approach 90%.
Google CEO Sundar Pichai previously revealed that more than 25% of the source code at Google is generated by AI. Microsoft CEO Satya Nadella said about 30% of Microsoft’s code is written by AI. The numbers are growing, but they’re still a long way from “90%.”
Engineers’ real reaction: “My job is to go in and clean up the mess”
Regarding Amodei’s prediction, a veteran game engine engineer said plainly that AI-generated code at this stage has fundamental quality problems: “the code produced by AI is a version that a better programmer would never write,” because AI tends to stack unnecessary abstraction layers rather than find a simpler solution.
Another engineer poured cold water on the prediction from the standpoint of corporate practice: any company that has existed for more than three years almost certainly runs legacy systems full of tribal knowledge, with poor documentation and messy naming. “That function library written by an engineer who left ten years ago? AI can’t really handle it.”
This view is supported by data. Stack Overflow’s 2025 Developer Survey found that 61.7% of developers have ethical and cybersecurity concerns about AI-generated code, and that developers broadly distrust the accuracy of its output, often needing to fix it by hand.
The overlooked structural obstacles: banks, healthcare, and government simply can’t use it
Another blind spot in the discussion is regulatory constraint. 84% of developers say they use or plan to use AI tools, but this is broad augmentation, not full automation. In heavily regulated industries such as banking, aerospace, healthcare, and finance, many organizations still explicitly ban or severely limit the use of large language models on core code, citing competitive confidentiality and compliance risks.
The U.S. Social Security Administration’s administrative system reportedly depends on 3,600 independent program components working in coordination, many of them written in COBOL. Modernizing legacy infrastructure of this kind is extremely complex, far beyond what a large language model can easily take over.
The business-logic contradiction in Silicon Valley
Some commentators point out an ironic logic: “If you can really have AI generate all the code, why are you selling it to others? You should just use it to develop all your own products and monopolize every market.”
In other words, Anthropic’s business model depends on engineers continuing to buy its tools, which makes the claim that “AI is going to put engineers out of work” feel somewhat self-contradictory. Engineers are the customers, and cutting off your customer base is bad business.
The prediction wasn’t a total miss, but it didn’t come true either
On the evidence so far, Amodei’s prediction has helped accelerate industry adoption of AI coding tools. But the timeline of “90% within three to six months” looks overly optimistic. Independent researchers estimate that, under reasonable operational definitions, the goal may still take nine to fifteen months to reach even within Anthropic, let alone across the entire industry.
As for the more fundamental question of whether AI can truly “understand” what code is doing, rather than merely generating strings that look plausible, there is no consensus in the industry. As one engineer put it: “A breakthrough could happen next week, or it might not come for ten years. I’m not in that field, so I can’t predict it.”
This article, “Anthropic CEO prediction: AI will handle all code within a year—industry engineers: it basically didn’t happen,” first appeared on Chain News ABMedia.