Former Tesla AI chief and OpenAI founding member Andrej Karpathy shared on X the core methodology he personally uses with large language models (LLMs): the biggest value of an LLM isn't helping you "cut writing," but helping you "upgrade your reading." He proposes a three-tier reading process that positions the LLM as a "reading amplifier." This view challenges the mainstream mindset that AI is primarily a writing accelerator.
Three-tier reading method: from the original text to LLM meta-analysis
The information-processing workflow described by Karpathy consists of three layers: the first is reading the original document itself; the second is having the LLM generate a summary to quickly grasp the core arguments; the third—also the most crucial layer—is asking the LLM to perform “meta-analysis,” evaluating which viewpoints in the document are “novel” or “surprising” to your existing knowledge framework.
The brilliance of this method is that it doesn’t replace human judgment with AI; it uses AI to improve how humans allocate their attention. When you need to process large amounts of information every day, the third layer’s novelty filtering can effectively help readers focus on content that truly deserves deeper reading.
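The three-tier flow described above can be sketched as a small pipeline. This is a minimal illustration, not Karpathy's actual tooling: `ask_llm` is a hypothetical placeholder for any chat-completion call (an OpenAI or Anthropic client, for instance), and the prompt wording is an assumption.

```python
def ask_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its reply.
    Swap in a real chat-completion client to run this live."""
    raise NotImplementedError("plug in a real LLM client here")

def three_tier_read(document: str, ask=ask_llm) -> dict:
    """Tier 1: the original text. Tier 2: an LLM summary of the core
    arguments. Tier 3: meta-analysis of which viewpoints are novel
    or surprising relative to existing understanding."""
    summary = ask(
        "Summarize the core arguments of this document:\n\n" + document
    )
    meta = ask(
        "Which viewpoints in this document are novel or surprising "
        "relative to mainstream understanding of the topic? Flag "
        "anything that challenges conventional wisdom:\n\n" + document
    )
    return {"original": document, "summary": summary, "meta_analysis": meta}
```

The point of the third call is not to trust its verdict outright but to use it as an attention filter: the reader decides what to read deeply based on what the model flags.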
Why “reading amplification” is more important than “writing acceleration”
Most people’s main use cases for ChatGPT or Claude involve generating text—writing emails, writing reports, writing code. Karpathy’s view is the opposite: he believes the value of LLMs at the input end (helping you absorb information better) far exceeds the value at the output end (helping you produce text faster).
The underlying logic is this: in knowledge work, the quality of decisions depends on the quality of information absorption. If you read the right things and understand the key points, your output naturally follows. Conversely, if you only speed up output with AI while the input quality remains unchanged, at best you’re just “producing mediocre content faster.”
Risks and blind spots: you still need domain knowledge to back it up
This approach assumes a prerequisite: users themselves need to have sufficient domain knowledge to determine whether the LLM’s analysis is correct. If someone completely unfamiliar with blockchain asks the LLM to assess the “novelty” of a DeFi paper, they are very likely to be misled by a confident but mistaken summary produced by the LLM.
Some researchers disagree, arguing that an LLM's writing ability is the bigger productivity boost and that reading support is secondary. The split between the two camps essentially reflects how different kinds of work weight input versus output: research-oriented work benefits more from a reading amplifier, while execution-oriented work benefits more from writing acceleration.
Implications for knowledge workers
Karpathy's framework offers all knowledge workers who consume large volumes of information a practical way to think about using AI: rather than letting AI write for you, let AI help you build a pipeline for "input quality control." Concretely, this might look like the following: every day, use an LLM to scan 20-plus industry articles, have it flag which viewpoints are novel, and then use your own judgment to decide which deserve deeper reporting or research. Far from eroding your judgment, this approach helps limited attention go to what truly matters in an era of information overload.
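The daily triage workflow just described can be sketched as a batch filter. Again this is an illustrative assumption, not a published tool: `ask_llm` stands in for a real chat-completion client, and the `NOVEL`/`SKIP` reply convention is an invented prompt contract.

```python
def ask_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its reply."""
    raise NotImplementedError("plug in a real LLM client here")

def triage_articles(articles, ask=ask_llm, marker="NOVEL"):
    """Return (title, llm_note) pairs for articles the LLM flags as novel.

    `articles` is an iterable of (title, text) pairs. The LLM is asked
    to begin its reply with `marker` only when the article contains a
    genuinely new viewpoint; the final call on what to read deeply
    stays with the human.
    """
    flagged = []
    for title, text in articles:
        note = ask(
            f"Reply starting with '{marker}' only if this article "
            "contains a viewpoint that is new or surprising, then "
            "explain why in one sentence. Otherwise reply 'SKIP'.\n\n"
            + text
        )
        if note.startswith(marker):
            flagged.append((title, note))
    return flagged
```

The filter deliberately returns the model's one-sentence justification alongside each title, so the reader can sanity-check the flag before investing time, which is exactly the domain-knowledge safeguard the previous section calls for.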
This article about Karpathy’s three-tier LLM reading method—AI’s greatest value isn’t in writing, but in helping you understand the world—first appeared on Chain News ABMedia.