Velocity Is Dead: AI-Generated Compilers and the Future of Software

Tech Essays Reporter
Feb 18, 2026

As AI coding agents generate massive codebases with unprecedented speed, the software industry faces a fundamental shift: code is becoming abundant while quality remains scarce. The real competitive advantage now lies not in how fast we can produce code, but in our ability to deliver working software through robust testing, modular architecture, and continuous delivery practices.

The software industry stands at a crossroads where the traditional metrics of success are being fundamentally challenged. When Anthropic recently demonstrated an AI-generated C compiler built with 100,000 lines of Rust code, the tech world celebrated the achievement as a triumph of velocity and scale. The project, which reportedly cost $20,000 in tokens and was described as "mostly" unsupervised, successfully compiled complex codebases like the Linux kernel and DOOM. This moment, while impressive, reveals a deeper truth about where software engineering is headed.


The celebration of such achievements, however, misses a crucial point. As Ray Myers, Chief Architect at OpenHands, observed while working on his own AI-generated compiler project at the same time, the real question isn't what we can produce, but what we're actually celebrating. Myers' version, built with 40,000 lines of Go code for just $200, was less complete but cost 1% as much. Both projects share a common limitation: despite their impressive scale, neither is practically useful.

This pattern extends beyond compiler generation. Tools like Cursor have demonstrated the ability to create almost-browsers and almost-spreadsheets, generating millions of lines of code that ultimately don't deliver functional products. The industry finds itself in a peculiar position where velocity has become the default metric, yet it no longer correlates with meaningful progress or value delivery.

The Shift from Quantity to Quality

The transformation we're witnessing isn't about whether AI can generate code—that capability has been established and is rapidly improving. The real shift is in understanding what matters when code generation becomes essentially "free." The past few years have shown remarkable progress: from the effective code generation of 2023 to the autonomous agents of 2024, and now to the sophisticated coding agents that can tackle complex projects with minimal human intervention.

However, this technological leap forward exposes a fundamental misalignment in how we measure and reward software development. We've built entire systems around measuring output—dashboarding it, incentivizing it, and optimizing for it. When AI maximizes output, we get exactly what we've been asking for: more code. But as output becomes abundant, having more of it stops creating competitive advantage.

Why Compilers Represent the Best Case

Compilers serve as an instructive example because they represent perhaps the ideal case for AI code generation. Each pass in a compiler is essentially a pure function with well-defined inputs and outputs, which makes it extremely testable. And when you have a reference compiler to use as an oracle, the path to verification is straightforward.
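
To make the oracle idea concrete, here is a minimal sketch in Go using a hypothetical toy expression language; nothing here is drawn from either of the real compiler projects discussed above. The constant-folding pass is a pure function over an AST, a tiny interpreter serves as the reference oracle, and the test asserts that folding never changes what a program means.

```go
// oracle_test.go: a minimal sketch of oracle-based pass testing for a
// hypothetical toy expression language (an illustrative assumption).
package main

import "testing"

// Expr is a tiny arithmetic AST node: a literal when L is nil,
// otherwise an addition of L and R (both non-nil by convention).
type Expr struct {
	Lit  int
	L, R *Expr
}

// fold is the pass under test: a pure function that rewrites the tree,
// collapsing any addition of two literals into a single literal.
func fold(e *Expr) *Expr {
	if e.L == nil {
		return e
	}
	l, r := fold(e.L), fold(e.R)
	if l.L == nil && r.L == nil {
		return &Expr{Lit: l.Lit + r.Lit}
	}
	return &Expr{L: l, R: r}
}

// eval is the reference oracle: it defines what an expression means.
func eval(e *Expr) int {
	if e.L == nil {
		return e.Lit
	}
	return eval(e.L) + eval(e.R)
}

// TestFoldAgainstOracle asserts the property a reference compiler gives
// you for free: the transformed program must mean the same thing.
func TestFoldAgainstOracle(t *testing.T) {
	cases := []*Expr{
		{Lit: 7},
		{L: &Expr{Lit: 1}, R: &Expr{Lit: 2}},
		{L: &Expr{L: &Expr{Lit: 3}, R: &Expr{Lit: 4}}, R: &Expr{Lit: 5}},
	}
	for _, c := range cases {
		if got, want := eval(fold(c)), eval(c); got != want {
			t.Errorf("fold changed program meaning: got %d, want %d", got, want)
		}
	}
}
```

A fuzzer generating random trees would strengthen this further, but even hand-picked cases show why each pass is so checkable: pure inputs, pure outputs, and an oracle to compare against.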

This "compiler-like" quality—well-understood domains, simple interaction points, and bulletproof testing strategies—represents the best-case scenario for AI-generated software. In these environments, software can indeed become "radically cheaper" to produce. But most enterprise systems bear little resemblance to this ideal. They feature complex interactions, difficult verification processes, and behavior that isn't fully understood.

The Continuous Delivery Imperative

The answer to this challenge lies in what Myers identifies as the "agent readiness" of a codebase or development lifecycle. This concept aligns closely with Continuous Delivery practices, which require the ability to ship frequent changes safely. The prerequisite for true Continuous Delivery, as defined in Minimum Viable CD, is that the pipeline decides the releasability of changes, and its verdict is definitive.
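
As a sketch of what "the pipeline decides" can look like in practice, here is a minimal gate in Go. The two specific checks are illustrative assumptions, not something Minimum Viable CD prescribes; the point is that every check defining releasability runs in one place, and the exit code is the only verdict anyone consults.

```go
// gate.go: a minimal sketch of "the pipeline decides releasability,
// and its verdict is definitive." The checks below are illustrative
// assumptions, not part of Minimum Viable CD itself.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Everything that defines "releasable" runs here and nowhere else.
	// There is no human-override path: a non-zero exit IS the verdict.
	checks := [][]string{
		{"go", "vet", "./..."},  // static analysis
		{"go", "test", "./..."}, // the full test suite
	}
	for _, c := range checks {
		cmd := exec.Command(c[0], c[1:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Printf("NOT RELEASABLE: %v failed: %v\n", c, err)
			os.Exit(1)
		}
	}
	fmt.Println("RELEASABLE")
}
```

The same gate doubles as the "back pressure" discussed below: an agent can run it locally and receive exactly the verdict the release process will render.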

Teams that consistently ship faster invest in the foundations that make this possible: comprehensive tests, modular architecture, type safety, and static analysis. This "shift-left" philosophy doesn't just make it easier to be confident in agentic contributions at release time; it also helps agents run more effectively by providing environment feedback, or what Myers calls "back pressure."

Breathing Life into Legacy Systems

For most enterprise environments that don't resemble the compiler ideal, the path forward involves traditional software engineering practices enhanced by AI capabilities. This means backfilling missing tests, isolating logic from side effects, and capturing an understanding of current behavior in forms that both humans and agents can use.
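
One common form this takes is the characterization (or "golden master") test: record what the system does today, without judging whether it is right, so that humans and agents can refactor behind a safety net. A minimal sketch in Go, where PriceWithFees and its golden values are hypothetical stand-ins for poorly understood legacy logic:

```go
// legacy_test.go: a minimal sketch of a characterization test.
// PriceWithFees and its golden values are hypothetical examples.
package main

import "testing"

// PriceWithFees stands in for legacy behavior nobody fully understands.
// We are not asserting it is correct, only pinning down what it does now.
func PriceWithFees(cents int, member bool) int {
	total := cents + cents*8/100 // a surcharge of unclear origin
	if member && total > 1000 {
		total -= 50 // an undocumented member discount
	}
	return total
}

// TestCharacterization records current behavior as the de facto spec:
// any refactor, human or agentic, that changes these outputs is caught.
func TestCharacterization(t *testing.T) {
	golden := []struct {
		cents  int
		member bool
		want   int // captured from the current implementation, not derived
	}{
		{100, false, 108},
		{100, true, 108},
		{2000, true, 2110},
	}
	for _, g := range golden {
		if got := PriceWithFees(g.cents, g.member); got != g.want {
			t.Errorf("PriceWithFees(%d, %v) = %d, want %d",
				g.cents, g.member, got, g.want)
		}
	}
}
```

An agent asked to untangle such a function now gets immediate back pressure: the moment a refactor changes an output, the test says so.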

These aren't AI techniques per se, but AI can certainly accelerate the process. The goal is to transform complex, poorly understood systems into ones that can benefit from the same velocity advantages that compilers enjoy. This requires investment in the fundamentals of software quality rather than just the quantity of output.

The New Competitive Landscape

In a world where software that "almost works" is becoming free, the competitive advantages shift dramatically. A product that works, and a process that works, will be what sets organizations apart. That means focusing on what matters and building it well, rather than simply making more of what might not work.

The death of velocity as a meaningful metric doesn't mean the end of progress. Instead, it marks the beginning of a more mature phase in software development where the focus returns to delivering value rather than just producing code. Organizations that recognize this shift and invest accordingly will find themselves with a significant advantage in the emerging landscape.

Looking Forward

The future of software engineering isn't about abandoning AI tools or returning to manual coding practices. Rather, it's about using these powerful capabilities wisely, focusing them on areas where they can create genuine value rather than just impressive demonstrations. It's about building systems that can actually benefit from AI assistance, rather than forcing AI to work within poorly structured environments.

As we move forward, the most successful organizations will be those that understand this fundamental shift. They'll invest in the practices and infrastructure that make their codebases "agent-ready," focusing on quality, testability, and delivery reliability. They'll measure success not by the volume of code produced, but by the value delivered to users.

Velocity is dead. Long live delivery.
