Software engineer Jess Schirach critiques the industry's rush toward LLM-generated code, arguing it promotes unmaintainable software, erodes developer skills, and creates accountability gaps.

As AI-assisted coding tools proliferate, software engineer Jess Schirach has published a critique warning against uncritical adoption of LLM-generated code. Drawing parallels between current trends and historical industrial shifts, Schirach argues that outsourcing coding to AI risks creating unmaintainable systems while removing the intellectual engagement that defines quality engineering.
The Mechanization Fallacy
Schirach directly challenges comparisons between AI code generation and historical industrialization: "Mechanization produced consistent outputs through deterministic processes. LLMs output non-deterministic, often hallucinatory code with opaque decision-making." A closer analogy, she suggests, is fast fashion - superficially functional but structurally flawed, environmentally costly, and prone to replicating whatever flaws already exist in the training data.
Unlike industrial automation, which standardized production, AI code generation introduces variability that complicates debugging. Schirach notes: "When mechanical systems failed, engineers could examine components. With LLMs, we can't audit reasoning paths behind generated code."
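The non-determinism Schirach points to comes from how LLMs pick tokens. A minimal sketch of temperature-based sampling (a toy distribution with invented scores, not any vendor's API) shows why the same prompt can yield different code on different runs:

    import math
    import random

    # Toy next-token distribution: invented model scores (logits) for
    # candidate tokens that could follow "items.sort(" in generated code.
    LOGITS = {"key=len)": 2.0, "reverse=True)": 1.6, ")": 1.5, "cmp=len)": 0.4}

    def sample_token(logits: dict[str, float], temperature: float) -> str:
        """Sample one token. Lower temperatures sharpen the distribution
        toward the top-scoring token; any temperature above zero leaves
        room for a different pick on each run."""
        weights = [math.exp(score / temperature) for score in logits.values()]
        return random.choices(list(logits), weights=weights, k=1)[0]

    # Five runs over the same "prompt" can emit different completions,
    # including a plausible-looking but invalid 'cmp=' variant that mixes
    # Python 2 and Python 3 idioms absorbed from old training data.
    for run in range(5):
        print(run, "items.sort(" + sample_token(LOGITS, temperature=0.8))

Each run re-rolls the dice, and stale patterns from outdated code, like the 'cmp=' parameter removed in Python 3, can still surface.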
Beyond Abstraction Layers
The argument that LLMs represent "just another abstraction layer" similarly falls short, according to Schirach: "Higher-level languages abstract implementation details but preserve developer intentionality. LLMs can't reason about system architecture - they statistically reassemble patterns without understanding."
This creates what Schirach terms "the accountability gap": "When humans outsource thinking to systems that can't think, nobody is thinking about architectural consequences. Look at the Horizon Post Office scandal - thirteen deaths resulted from unexamined software flaws."
Quality Degradation Loop
Training data quality presents another fundamental limitation: "LLMs ingest vast quantities of human-written code, replicating our worst practices. We've created what Meredith Broussard termed 'human centipede epistemology' - AI regurgitating errors that become future training data."
Schirach points to real-world consequences: "Examine any web application's accessibility failures, performance bottlenecks, or discriminatory algorithms. These aren't AI-specific failures - they're amplified human failures."
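The dynamics of that loop are easy to make concrete. A toy simulation (all numbers invented; only the shape of the curve matters) shows what happens when each model generation trains on a corpus that mixes fresh human code with the previous generation's recycled output:

    # Toy model of the quality degradation loop described above.
    def next_defect_rate(current: float, human_rate: float = 0.05,
                         recycled_fraction: float = 0.5,
                         amplification: float = 1.3) -> float:
        """Defect rate of the next generation's training corpus.

        The corpus blends fresh human-written code (human_rate) with
        recycled model output, whose defects arrive slightly amplified
        (hallucinated APIs, duplicated anti-patterns).
        """
        recycled = min(1.0, current * amplification)
        return (1 - recycled_fraction) * human_rate + recycled_fraction * recycled

    rate = 0.05
    for generation in range(1, 9):
        rate = next_defect_rate(rate)
        print(f"generation {generation}: defect rate ~{rate:.3f}")

With these made-up parameters the rate climbs from 5% toward an equilibrium near 7% - the loop settles at a quality level worse than its human inputs, illustrating the recycling dynamic Schirach describes.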
The Review Process Breakdown
Schirach highlights how AI impacts code review practices, referencing a slide from Jessica Rose and Eda Eren's FFConf presentation: "Human-written PRs represent reasoned decisions. Generated PRs shift accountability entirely to reviewers."
She describes emerging workflows in which agents are prompted from Slack to open PRs directly: "Now one person can prompt, generate, and approve changes - eliminating knowledge sharing and reducing accountability. Open source maintainers already face floods of low-quality generated PRs."
Responsible Integration
Despite these criticisms, Schirach acknowledges valid use cases: "As 'spicy autocomplete' for prototyping or debugging, these tools have value. Mikayla Maki's framework makes sense: treat agents like untrusted external contributors, and only delegate tasks you fully understand."
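What treating an agent as an untrusted external contributor might look like can be sketched in a few lines (a hypothetical gate assuming a git/pytest workflow; the branch name, commands, and prompt are illustrative, not Maki's prescription):

    import subprocess

    def review_agent_patch(patch_path: str) -> bool:
        """Gate an agent-generated patch behind the checks an untrusted
        outside contributor would face; returns True only on human approval."""
        # 1. Never apply generated changes directly to main.
        subprocess.run(["git", "checkout", "-b", "agent/proposal"], check=True)
        subprocess.run(["git", "apply", patch_path], check=True)
        # 2. Generated code gets no shortcuts: run the full test suite.
        if subprocess.run(["pytest", "-q"]).returncode != 0:
            return False
        # 3. Read the diff, then make an explicit, informed decision.
        subprocess.run(["git", "diff", "main"], check=True)
        answer = input("Did you read and understand every change? [y/N] ")
        return answer.strip().lower() == "y"

The deliberate friction is the point: the human remains the one doing the understanding that, in Schirach's argument, cannot be delegated.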
The core concern remains skill preservation: "What happens when we stop practicing coding? We lose the ability to recognize flawed outputs. We must remember why we enjoyed engineering - the intellectual challenge of solving problems, not prompt engineering."
Schirach concludes: "This isn't anti-progress - it's anti-hype. We need to stop generating and start thinking. Code you didn't write is code you don't understand. Code you don't understand is code you can't maintain."
