In the age of AI agents generating code at unprecedented speeds, a critical question emerges: does our choice of programming language still matter? Will we simply converge on the top five languages because that's what our AI tools have been trained on?



At first glance, the evidence seems to support this convergence. Python, with its massive training corpus, has become a sweet spot for AI code generation. GitHub Copilot, powered by models like GPT or Claude Sonnet, can produce reasonably functional Python scripts for a wide range of tasks. In fact, I've seen Python scripts written by rookies that are worse than what these agents generate. This creates a powerful feedback loop: Python's popularity enables better AI support, which in turn makes Python even more popular.

But for programming language aficionados, and for those who love their craft, there's reason to celebrate the dawn of AI in development. Rather than signaling the end of language diversity, this new era is actually highlighting the unique strengths of certain programming paradigms.

The Unexpected Advantage: Static Type Systems

One of the most counterintuitive insights is that a compiler with an expressive static type system actually helps AI agents converge on solutions with much shorter feedback loops. The old adage "if it compiles, it runs" was never fully true, but in languages like Scala, Haskell, or Rust, our confidence in code that compiles is indeed much higher.

Consider Scala 3, which introduced a sophisticated macro system. Despite the limited number of public code examples demonstrating these new features, AI agents are already generating functional Scala 3 code. This is particularly evident when using VS Code with GitHub Copilot, which integrates with the Language Server Protocol (LSP). The Metals LSP server quickly surfaces compilation errors, allowing the AI to iterate and fix issues in real time. More often than not, the result is code that compiles correctly.

This ability to iterate and converge toward a working solution based on external feedback, whether from a compiler, unit tests, or other validation mechanisms, is what makes AI agents usable. And here's where expressive static type systems shine: they provide much faster feedback than other validation methods such as unit tests. When developers complain about Scala's or Rust's compilers being slow, they often overlook that these compilers eliminate entire classes of errors that would otherwise require extensive testing. In an era where we need all the help we can get to guard against AI hallucinations, this advantage becomes critical.
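To make the point concrete, here's a minimal Scala 3 sketch of the kind of feedback loop described above. The domain type `PaymentStatus` and its cases are hypothetical, invented purely for illustration; the mechanism, exhaustivity checking on a sealed hierarchy, is standard Scala.

```scala
// A sealed hierarchy: the compiler knows every possible case.
sealed trait PaymentStatus
case object Pending  extends PaymentStatus
case object Settled  extends PaymentStatus
case object Refunded extends PaymentStatus

// An exhaustive match. If an agent (or a human) deletes or forgets the
// `Refunded` case, the compiler flags the non-exhaustive match immediately,
// feedback that arrives in seconds, without writing or running a single test.
def describe(s: PaymentStatus): String = s match
  case Pending  => "awaiting confirmation"
  case Settled  => "settled"
  case Refunded => "money returned"
```

The point is not this toy example but the shape of the loop: a whole class of "forgot a case" bugs is ruled out at compile time, which is exactly the fast external signal an iterating agent needs.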

The Challenge of Comprehension Debt

Beyond technical correctness, we face a more profound challenge: reviewing and understanding what AI agents produce. Programmers may describe what they want, instead of how, in natural language, but they still need to verify that they're getting what they asked for. Simply executing a program and checking its output is a superficial testing approach at best. At a minimum we need tests, and since we want them automated, we need to examine the tests that the AI generates. Did the agent cover the obvious edge cases? Without reading the code, we can't really say.

An emerging issue that many haven't yet recognized is "comprehension debt." Software projects aren't just about telling computers how to behave to produce desired outcomes; they're also about the knowledge built along the way. If a team loses the people who understand the inner workings of a project, it's in trouble. AI agents won't help here, because their limited context windows tend to vanish. We won't keep around all our dialogs with AI agents, and even if we did, that kind of documentation is poor and easily misinterpreted. Context poisoning is also a real concern. As David Parnas writes in "Software Aging":

"Although it is essential to upgrade software to prevent aging, changing software can cause a different form of aging. The designer of a piece of software usually had a simple concept in mind when writing the program. If the program is large, understanding that concept allows one to find those sections of the program that must be altered when an update or correction is needed. Understanding that concept also implies understanding the interfaces used within the system and between the system and its environment."

"Changes made by people who do not understand the original design concept almost always cause the structure of the program to degrade. Under those circumstances, changes will be inconsistent with the original concept; in fact, they will invalidate the original concept. Sometimes the damage is small, but often it is quite severe. After those changes, one must know both the original design rules, and the newly introduced exceptions to the rules, to understand the product. After many such changes, the original designers no longer understand the product. Those who made the changes, never did. In other words, nobody understands the modified product. Software that has been repeatedly modified (maintained) in this way becomes very expensive to update."


Preserving Knowledge in the AI Era

The source of truth has always been the source code itself, and that won't change. So how do we preserve knowledge despite the natural churn and evolution in software projects?

One approach is through exceptional source code that makes design and intent clear. Great source code is like mathematics: ageless, describing what you want rather than how, with an architecture that allows for evolution while clearly establishing design invariants, or "laws." Programming in a higher-level language matters for AI agents too. It matters whether the source code captures the full specification, the original intent and design constraints, or not. AI agents might be proficient at assembly language, but they can't be truly effective when specifications must be serialized through a lossy process.
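One way source code can state a design "law" directly, rather than burying it in tribal knowledge, is to encode the invariant in a type. The `NonEmpty` type below is a hypothetical sketch (production code would likely reach for something like cats' `NonEmptyList`), but it shows the idea: the invariant "at least one element" lives in the type itself.

```scala
// The invariant "at least one element" is part of the type's shape:
// a NonEmpty value cannot be constructed empty, so no comment, runtime
// check, or institutional memory is needed to preserve it.
final case class NonEmpty[A](head: A, tail: List[A]):
  def toList: List[A] = head :: tail

// Division by zero is impossible by construction; a reader (or an AI
// reviewer) can see that directly from the signature.
def average(xs: NonEmpty[Double]): Double =
  val all = xs.toList
  all.sum / all.length
```

Code like this keeps its specification with it: whoever maintains it next, human or agent, inherits the design constraint along with the source.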

For reviewing source code, deductive reasoning will never go out of style, because it's how our brains work. Functional programming, with its "equational reasoning," remains valuable in the age of AI agents, and arguably becomes even more important. When you're working with an inconsistent system that can't learn, the ability to reason about code becomes paramount.
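A small sketch of what equational reasoning buys a reviewer: with pure functions, a call can be replaced by its value anywhere it appears, so code can be simplified on paper, step by step, without running it. The function and values here are made up for illustration.

```scala
// A pure function: its result depends only on its arguments,
// with no side effects.
def area(w: Int, h: Int): Int = w * h

// Because `area` is pure, these two expressions are interchangeable;
// a reviewer can substitute equals for equals and reduce both,
// by hand, to the same value.
val direct = area(3, 4) + area(3, 4)
val shared = { val a = area(3, 4); a + a }
```

This is precisely the property that lets a human reader verify AI-generated code by reasoning alone, rather than by trusting an opaque run of the program.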

As we navigate this new landscape, we shouldn't fear the rise of AI in programming. Instead, we should embrace the opportunity it presents to elevate our craft. Languages that encourage clarity, expressiveness, and strong design principles will not only survive but thrive, helping us build better software while preserving the knowledge that makes our systems maintainable and valuable for years to come.