Gate News, March 17 — Sentry co-founder David Cramer posted on X today, saying he is “completely convinced” that large language models are not currently a net productivity gain. In Cramer's view, while LLMs lower the barrier to entry for development, they keep generating increasingly complex, hard-to-maintain code, which in his own experience slows long-term development.

He is particularly skeptical of the “agentic engineering” approach, in which models generate code automatically and deploy it directly, arguing that the quality of the output is significantly worse and that, once it accumulates in volume, it becomes a net burden. Specific problems he cites include poor performance at incremental development in complex codebases, an inability to produce interfaces in idiomatic language style, and “pure slop test generation.”

Cramer singled out the OpenClaw tool: “If I had to bet, tools like OpenClaw are already beyond recovery because they generate too much code.” He stressed that “software is still very hard to build; it has never been about minimizing or maximizing lines of code.”

Cramer added that his judgment is based mainly on his experience building features in mature codebases of ordinary complexity. The recent increase in his contributions, he said, comes from “finding it interesting” rather than things “becoming easier.” He sees this as fundamentally a psychological shift, with no real difference in actual time spent.