When the Prompt Is the Program: Rethinking Source Control in the Age of AI

It started with a paradox: a programmer who hates Boston yet runs "The Boston Diaries" blog. But Sean Conner’s latest entry isn’t about geography—it’s a deep dive into "vibe coding," a term he uses to describe the act of surrendering to AI-generated code. His central question cuts to the core of modern development: if AI writes the code, what exactly is the "source" we commit to version control?


Conner cites the now-familiar definition of "vibe coding": "where you fully give in to the vibes, embrace exponentials, and forget that the code even exists." He recounts encountering projects—some commercial—built this way, where developers treat AI prompts as the primary artifact. This mirrors his own experience with a PEG (Parsing Expression Grammar) parser: he checks in the PEG grammar, not the generated C "sludge," because the PEG is the true source. Similarly, domain-specific languages (DSLs) that compile to Rust or C are version-controlled in their high-level form, not their output.

"So how is that any different from 'vibe coding'?" Conner asks. "It's not the output that you necessarily care about, but the input. When 'vibe coding,' the source code is the prompt or prompts."

His argument challenges conventional wisdom. Proponents of AI-assisted coding often insist on committing the generated code for reproducibility. But Conner counters this by invoking AI's promised endgame: a future where developers describe systems in natural language (e.g., "write a CMS updated via email"), and AI handles the rest. In this world, the prompt is the specification—the equivalent of a DSL. Checking in bulky, opaque generated code undermines AI’s value, much like storing assembly instead of C source would today.

Yet the implications are thorny. Reproducible builds become a nightmare if the same prompt yields different outputs across AI model versions, or even across runs of the same model under nonzero sampling temperature. Conner sarcastically notes this could be "solved by AI," highlighting a circular dependency. He admits finding this future "horrifying," fearing it erodes developer control and transparency. But as AI tools like GitHub Copilot evolve, his analogy forces a critical discussion: are we elevating abstraction so far that code becomes a black box?
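One partial mitigation—my sketch, not something proposed in Conner's post—is a "prompt lockfile": alongside the committed prompt, record the pinned model identifier, sampling settings, and a hash of the output that was actually accepted, so a later regeneration that diverges from the shipped build is at least detectable. All names here are illustrative.

```python
import hashlib
import json

def lock_entry(prompt: str, model: str, temperature: float, output: str) -> dict:
    """Build a lockfile record tying a prompt to the exact generation it produced.

    This cannot force the model to reproduce the output; it only makes
    drift between model versions (or runs) detectable at build time.
    """
    def digest(s: str) -> str:
        return hashlib.sha256(s.encode("utf-8")).hexdigest()

    return {
        "prompt_sha256": digest(prompt),
        "model": model,               # pinned model identifier (illustrative)
        "temperature": temperature,   # deterministic settings where the API allows
        "output_sha256": digest(output),
    }

entry = lock_entry(
    "write a CMS updated via email",      # the example prompt from the article
    "example-model-2025-01",              # hypothetical pinned version
    0.0,
    "/* accepted generated code would be hashed here */",
)
print(json.dumps(entry, indent=2))
```

Verifying the lockfile on rebuild turns "did the AI give us the same thing?" from a vibe into a yes/no check.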

This isn’t just philosophical—it’s practical. Teams using prompt-driven tools must now decide: do they version prompts, generated code, or both? Conner’s PEG parallel suggests prompts deserve first-class status, but only if treated with the rigor of traditional code (e.g., prompt testing, versioning). Otherwise, "vibe coding" risks becoming vibes without accountability. As AI reshapes development, defining "source" may be the next frontier in engineering discipline.
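One way to give prompts that first-class rigor—a sketch of the idea, not a method from Conner's post—is to commit a behavioral acceptance suite next to the prompt, and treat whatever code the model emits as an opaque artifact that must pass the suite before it ships. The function names below are stand-ins.

```python
# The prompt is version-controlled; the generated implementation is not
# trusted on its own. A committed acceptance suite defines "correct,"
# whichever code the model happens to produce on a given day.
PROMPT = "write a function that counts the words in a string"

def generated_word_count(text: str) -> int:
    # Placeholder standing in for AI-generated code loaded at build time.
    return len(text.split())

def acceptance_suite(impl) -> bool:
    """Return True only if the implementation meets the committed spec."""
    return (impl("one two three") == 3
            and impl("") == 0
            and impl("  spaced   out  ") == 2)

assert acceptance_suite(generated_word_count)
```

The suite, not the generated code, carries the accountability: regenerating from the prompt is safe exactly when the new output still passes.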

Source: The Boston Diaries by Sean Conner