GitHub co-founder raises $17 million from a16z to build Git for the agent era

Written by: Leo

Have you ever considered that programming might be changing completely? Developers are shifting from simply using AI tools to treating AI as a new foundational layer for building software. This is not a minor adjustment but a fundamental paradigm shift. Think about it: the core concepts we have always taken for granted, version control, branching, code review, even the definition of “collaboration,” are being redefined by AI agent-driven workflows. What is even more striking is that Git, the tool we use every day, was designed 20 years ago around a mailing-list patch workflow, and now it has to serve scenarios where human developers and a group of AI agents work side by side.

This is why the news that GitButler just raised $17 million in Series A funding made me pause and think. The round was led by a16z, with Fly Ventures and A Capital participating. Even more interesting: GitButler’s CEO, Scott Chacon, is one of the co-founders of GitHub and the author of “Pro Git,” the book almost every developer has read. Why would someone who has already succeeded enormously in version control return to the startup grind to rethink a seemingly “solved” problem? In the announcement he put it plainly: “We’re not building a ‘better Git’; we’re building the next-generation infrastructure for software development.” Behind that statement lies a deep insight into where software development is headed.

The 20-year dilemma of Git: Designed for mailing list workflows

I’ve found that many people don’t understand the historical background of Git. Git was originally created by the Linux kernel team in 2005, and its design philosophy is deeply rooted in the Unix tradition. Scott mentioned an interesting detail in an interview: the core Git team never intended to make a user-friendly interface. They followed the Unix philosophy, building a series of low-level “pipe commands,” each doing one simple thing, which could then be chained together with Perl scripts to do anything you wanted. This design made sense at the time because they assumed only technical experts like the Linux kernel team would use this tool.

What happened later is well known. A developer known as Pasky (Petr Baudis) wrote Perl scripts that wrapped Git in a unified user interface, essentially the CLI commands we use today. Those scripts grew increasingly popular and were eventually merged into Git core as the so-called “porcelain” layer. Interestingly, these commands have remained largely unchanged since 2005 or 2006. They were originally written in Perl and later rewritten in C, but the core logic and user interface have stayed almost the same. Scott notes that the commands he described in the first edition of “Pro Git” in 2009 are still fully usable today.

This stability is somewhat positive. The Git team places great importance on backward compatibility; they are reluctant to remove existing features for fear of breaking workflows. But this also introduces a fundamental problem: the core assumptions built into Git’s original design are now seriously out of sync with modern software development practices. Git was designed for sending patches via mailing lists. Back then, developers would make local changes, generate a patch file, send it via email to maintainers, who would review and decide whether to accept it. The entire process was asynchronous, text-based, and single-threaded.

And now? We have continuous integration, continuous deployment, distributed teams working in real-time, code review tools, and various automated testing and deployment pipelines. More importantly, AI agents are now writing code at scale. Scott pointed out an impressive observation: we are now teaching a group of AI agents to use a tool originally designed for mailing list patches. This mismatch is like letting a Tesla drive on roads built for horse-drawn carriages.

Git’s Unix-philosophy design creates another problem: it tries to serve both computers and humans through a single interface. For example, running “git branch” by default just prints a bare list of branches. Git has to ensure that the same output can be read by humans and parsed by other programs. The result of this compromise is that Git is neither truly friendly to humans nor optimized for programs. Some commands do offer a “--porcelain” option for machine-readable output, but it is not applied consistently, and many commands lack it altogether.
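The compromise is easier to see in miniature. Below is a hypothetical sketch (my own illustration, not Git’s actual code) of one renderer serving two audiences: the human format is free to change over time, while the machine format, in the spirit of a “--porcelain” flag, promises one stable record per line.

```python
def render_branches(branches, current, porcelain=False):
    """Render a branch list for humans or for machine parsing.

    Hypothetical illustration of the "porcelain vs. plumbing" tension:
    the human format may change freely (markers, hints, colors), while
    the machine format guarantees one stable, tab-separated record per
    line that scripts can keep parsing forever.
    """
    if porcelain:
        return "\n".join(
            f"{'*' if b == current else '-'}\t{b}" for b in branches
        )
    # Human-friendly output: free-form, allowed to evolve over time.
    return "\n".join(
        f"{'* ' if b == current else '  '}{b}" for b in branches
    )

print(render_branches(["main", "fix-ui"], current="main"))
# With porcelain=True the output becomes stable machine records.
print(render_branches(["main", "fix-ui"], current="main", porcelain=True))
```

A tool that only ever emits the second format is easy to script against; one that only emits the first is pleasant to read; Git’s single interface has to split the difference.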

New challenges in the AI agent era: one working directory is no longer enough

As AI begins to participate massively in programming, Git’s limitations become even more apparent. I’ve recently tried working with multiple AI agents simultaneously and found that Git’s basic assumption—a single developer, a single branch, a linear workflow—is completely obsolete. Modern developers don’t work linearly. You might run multiple agents at once: one fixing UI bugs, another optimizing database queries, a third updating documentation. But Git’s index system collapses under this parallel editing because it assumes your local working copy represents a single, atomic change to the codebase.

The traditional solution is to use worktrees—creating multiple copies of the repository for each parallel task. But this introduces new problems. If you have five agents working simultaneously, you need five full copies of the working directory. Although Git has optimized storage, this still means a lot of file duplication and disk space consumption. More importantly, these agents are completely isolated; they don’t see what others are doing until they finish and attempt to merge, at which point conflicts can be costly to resolve.

GitButler’s proposed solution is parallel branches. This is a design that really caught my eye. Parallel branches are like regular branches, but you can have multiple open at once. You get the benefits of worktrees (logical isolation) without copying all files. All agents operate within the same working directory, but their modifications are assigned to different virtual branches. Scott described a compelling scenario: they had two agents working simultaneously, both wanting to edit the same file but with incompatible changes. The result? One agent automatically stacks its branch on top of the other’s, then continues working and commits on its own stacked branch. This intelligent conflict handling is almost impossible in traditional Git workflows.
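To make the idea concrete, here is a toy model (my own sketch, not GitButler’s actual data structures) of several agents sharing one working directory, with each change assigned to a named virtual branch; when a second branch touches a file another branch already owns, it stacks on top of that branch instead of conflicting:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class VirtualBranch:
    name: str
    files: set = field(default_factory=set)   # files this branch has touched
    stacked_on: Optional[str] = None          # parent branch when stacked

class SharedWorkdir:
    """Toy model of parallel (virtual) branches in one working directory."""

    def __init__(self):
        self.branches = {}  # branch name -> VirtualBranch

    def record_change(self, branch, path):
        vb = self.branches.setdefault(branch, VirtualBranch(branch))
        # Does some *other* virtual branch already own this file?
        owner = next(
            (b for b in self.branches.values()
             if path in b.files and b.name != branch),
            None,
        )
        if owner is not None and vb.stacked_on is None:
            # Stack this branch on the owner instead of conflicting.
            vb.stacked_on = owner.name
        vb.files.add(path)
        return vb

wd = SharedWorkdir()
wd.record_change("agent-ui", "app.py")
second = wd.record_change("agent-db", "app.py")  # same file, second agent
print(second.stacked_on)  # prints "agent-ui": the second branch stacked
```

The point of the model is only the bookkeeping: no files are copied, and the “conflict” is resolved by reordering the branches rather than by stopping either agent.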

I especially appreciate an experiment by the GitButler team, although they ultimately didn’t adopt it. They tried giving multiple agents a chat channel to communicate what they were doing. Scott said this feature looked super cool—they could see the agents’ conversations and were eager to release it. But after extensive testing, they found it didn’t help. Agents would detect others modifying a file, infer the reason, and adjust their work strategies accordingly. They didn’t need explicit communication because the overhead of communication slowed everything down. This insight is very enlightening: we can’t simply transplant human collaboration patterns onto agents; agents have their own ways of working.

Redesigning user interfaces: for humans, for agents, for scripts

GitButler’s recently released CLI tool really piqued my interest. It isn’t just a wrapper around Git but a fundamental rethinking of how command-line tools should be designed. Scott mentioned an interesting observation: about 80% of developers still operate Git from the command line, despite the existence of various GUIs. The reason is simple: most Git GUIs are just graphical wrappers around Git commands, offering little added functionality and often making operations slower. If you know what command to run, typing it directly is usually faster.

But GitButler’s CLI is different. It offers different output formats tailored to different consumers. Run a command normally and it produces optimized, human-readable output, including prompts and suggestions. Add the “--json” flag and it outputs structured JSON for scripts to parse. They’re even considering a “--markdown” option optimized for agents, because markdown is easier to inject into an agent’s context.

More interestingly, they refine the tool’s design by observing agent behavior. They found that although “--json” is available, agents prefer the human-readable output and then pipe it through jq or write Python scripts to extract what they need. Another discovery: after running any mutating command, agents almost always run “git status” immediately afterward. So GitButler added a “--status-after” option to all mutating commands, which automatically prints the status after execution. This runs contrary to traditional Unix philosophy and isn’t ideal for scripting, but it’s perfect for agents.
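The pattern described here (human output by default, “--json” for scripts, status echoed after mutating commands) can be sketched roughly as follows. The command name, flags, and fields below are invented for illustration, not GitButler’s real CLI:

```python
import argparse
import json

def current_status():
    # Stand-in for real repository inspection; the fields are invented.
    return {"branch": "agent-ui", "dirty_files": 2}

def cmd_commit(args):
    result = {"committed": True, "message": args.message}
    if args.json:
        # Machine-readable: one stable JSON object per invocation.
        if args.status_after:
            result["status"] = current_status()
        print(json.dumps(result))
        return
    # Human-readable: prose that a person (or an agent) can act on.
    print(f"Committed: {args.message!r}")
    if args.status_after:
        s = current_status()
        print(f"On branch {s['branch']}, {s['dirty_files']} file(s) still dirty")

parser = argparse.ArgumentParser(prog="but")  # hypothetical CLI name
parser.add_argument("message")
parser.add_argument("--json", action="store_true")
parser.add_argument("--status-after", action="store_true")

# Simulate a mutating command run by an agent with both flags set.
cmd_commit(parser.parse_args(["fix ui bug", "--json", "--status-after"]))
```

The value of a “--status-after” style option is exactly what the article describes: it folds the follow-up “git status” an agent would run anyway into the same invocation, saving a round trip.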

They’re also exploring how to provide more context to agents via output. For example, including hints like “If you want to do this, run this command.” This isn’t meant for humans, who might find it verbose, but for agents, this extra context helps them decide the next step faster. Scott said this is a very interesting UX challenge because we need to treat agents as a new kind of “user persona,” with needs and behaviors entirely different from humans.

The fundamental change in software development: from writing code to writing specifications

In the interview, Scott shared a thought-provoking idea: the best software engineers in the future might not be those who write the most perfect code, but those who communicate, write, and describe most effectively. This might seem counterintuitive—many of us chose programming to avoid dealing with people, not to engage with them. But upon reflection, this trend makes perfect sense.

When AI agents can generate code efficiently, the bottleneck is no longer implementation details but your ability to clearly describe what you want. Scott shared his workflow: he now spends most of his time writing detailed specifications describing how a feature should work. When a design decision is needed, he asks AI to implement it based on the spec, then tests the result. If there are issues, he revises the spec and asks AI to redo it. This cycle can be very fast because he doesn’t have to write all the implementation code himself.

The beauty of this approach is that you can always do “show and tell.” Traditionally, to validate an idea, you’d write a detailed technical document and persuade team members to read and give feedback. But no matter how detailed, a working prototype is more intuitive. Now, you can quickly generate a prototype, let the team experience it firsthand, and iterate based on feedback. This greatly accelerates the cycle from idea to validation.

But it also introduces new challenges. The bottleneck in team collaboration shifts from “can we implement this feature” to “can we reach consensus on what we want.” Scott said many developers, especially those who consider themselves very smart, think they don’t need to explain what they’re doing—the code itself is the best documentation. But in the AI era, this attitude no longer works. You must be able to clearly articulate your intent and write specifications that both humans and AI can understand. Writing skills become a new superpower.

This also makes me think about the future of code review. Scott posed a sharp question: if you honestly ask most software engineers, do they really read every line of a PR carefully? Do they think through the logic line-by-line? Do they pull the code locally to test? Or do they just skim and approve if it looks fine? Most would choose the latter. This isn’t due to irresponsibility but because thorough code review is costly and often yields limited benefits.

AI agents could change this game. Agents are very good at scrutinizing every line, running tests, and checking for potential issues. They don’t get tired or bored, and they apply consistent standards. Human reviewers could then focus on the high-level questions: Does this change align with product goals? Does it solve real user problems? Is the architecture sound? The implementation details, syntax, and potential bugs can be left to AI.

PRs and issues: a 20-year-old collaboration model needs to evolve

GitHub’s pull request system has become the standard for open-source collaboration, but Scott believes it has fundamental flaws. PRs are based on branch reviews, not patch reviews. This leads to a lot of “junk commits”—like “fixed a small bug here” or “forgot to add this file.” Because PRs focus on entire branches, the quality of individual commits isn’t emphasized. PR descriptions are key, but they’re not stored in Git history and often get lost after merging.

In the mailing list era, this wasn’t an issue. Each patch had a carefully written commit message, which served as the PR description. Review was patch-based, and the quality of the patch and message was directly related. But in the GitHub era, we’ve lost that constraint. Scott suggests that future code review should return to patch-based review, combined with modern tools. Review should be local—you should be able to run code, test features, and have AI help identify issues. AI can run tests, flag potential problems, and you only need to focus on parts that truly require human judgment.

Another interesting point is about team communication. Scott said that real-time communication between teams has always been a weak spot. If you’re editing the same file as someone else, conflicts only surface during merge, and one person bears the entire burden of resolving conflicts. But what if we could know what others are doing in real time? For humans, this might be too disruptive or costly. But for agents, it’s not a problem. Agents can communicate during idle times, understand what others (or other agents) are working on, and proactively detect conflicts or adjust their work strategies to avoid them.

GitButler’s metadata system is also intriguing. They want to attach conversation logs, agent thought processes, and related context to commits or branches. Git currently has limited support for this kind of metadata. Such records could be very valuable for understanding why certain decisions were made or the thinking behind a piece of code. But it introduces a real data challenge: even purely text-based metadata grows rapidly. Scott mentioned they have to borrow techniques from teams that manage enormous repositories, such as Chrome or Microsoft Office, to handle this scale.
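A quick back-of-envelope calculation shows how fast this adds up. The numbers below are my own illustrative assumptions, not figures from GitButler:

```python
def metadata_growth_gib(commits_per_day, avg_log_kib, days):
    """Total size of per-commit text metadata, in GiB.

    All inputs are illustrative assumptions (e.g. conversation logs or
    agent reasoning traces attached to every commit), not measurements.
    """
    total_kib = commits_per_day * avg_log_kib * days
    return total_kib / (1024 * 1024)

# A busy agent-heavy repo: 500 commits/day, 200 KiB of logs per commit.
print(f"{metadata_growth_gib(500, 200, 365):.1f} GiB of metadata per year")
```

Even with these modest assumptions you end up with tens of GiB of pure text per year, which is why techniques from very large monorepos become relevant.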

Reflections on this transformation

After reading about GitButler and Scott’s insights, I have some deep thoughts. Software development is undergoing a fundamental paradigm shift, and version control systems, as the infrastructure of software development, must evolve accordingly. Git’s design philosophy was advanced 20 years ago, but now it’s a limiting factor. What we need isn’t just a “better Git,” but a reimagined infrastructure tailored for modern workflows and the AI era.

Scott’s reflection on following trends to their “logical endpoint” resonated with me. He recalled that when real-time translation technology appeared, many people in the language-learning space declared language learning dead. But he argued that even with perfect translation, both parties still need to wear translation devices, and the experience is far from natural. He described a week spent working with a translator in Japan: the translation was excellent, but the experience still wasn’t ideal. You wouldn’t want to build deep relationships or complex collaborations through such a medium. The same applies to programming. No matter how powerful AI agents become, they can’t fully replace human judgment, creativity, and communication.

Regarding GitHub’s future, I agree with Scott’s balanced view. GitHub’s biggest strength is its user base; its biggest weakness is being a large corporation slow to pivot. The industry is exploring what the “next GitHub” might be, but Scott points out that the question itself might be misguided. GitHub isn’t the “next” anything; it created a new collaboration paradigm. Similarly, the future might bring a completely different, currently unimaginable mode of collaboration.

I believe the value of GitButler isn’t just in its specific features but in its way of thinking. They challenge assumptions we’ve taken for granted: Why work on only one branch at a time? Why must commits be linear? Why should agents and humans use the same interface? Why must collaboration be through PRs and issues? This first-principles thinking is exactly what we need in this rapidly changing era.

I also realize that as developers, we need to cultivate new skills. Writing clear specifications, communicating ideas effectively, understanding how AI agents work—these might become more important than just coding. It’s a challenge, especially for those who chose programming to avoid human interaction. But it’s also an opportunity: to free ourselves from low-level implementation details and focus on more creative work—defining problems, designing solutions, making trade-offs.

GitButler’s $17 million funding is just the beginning. I believe in the coming years we’ll see more efforts to rethink the foundations of software development. Version control, code review, project management, testing, deployment—all tools designed before the AI era—must be reexamined. Developers and teams who adapt early will gain a huge advantage in this transformation.

Ultimately, software development will become more about communication, collaboration, and decision-making than syntax and implementation details. This might unsettle some traditional programmers, but I see it as a positive shift. It makes programming closer to solving core problems rather than being bogged down by technical minutiae. When we no longer need to remember complex Git commands, manually resolve merge conflicts, or write repetitive code, we can focus on what truly matters: understanding user needs, designing elegant solutions, and creating valuable products. That’s the core of software development—and the direction GitButler aims to help us return to.
