As AI tools reshape software development, engineers are shifting from creation to verification, adopting new maturity models, and rethinking architecture. This article examines the emerging patterns in AI-assisted software engineering and what they mean for the future of the profession.
The Middle Loop: AI's Transformation of Software Engineering

The rapid advancement of AI in software development is fundamentally changing how engineers work. Recent research and expert observations reveal a significant shift from creation-oriented tasks to verification-oriented work, the emergence of new architectural patterns, and evolving maturity models for AI adoption. These changes aren't just incremental improvements—they represent a paradigm shift in what it means to be a software engineer.
The Shift to Supervisory Engineering Work
Annie Vella's research into how professional software engineers use AI reveals a critical transformation: engineers are spending less time on creation and more on verification. But this verification isn't the traditional code review or testing we've known—it's something new, which Martin Fowler terms "supervisory engineering work."
Supervisory engineering involves the effort required to direct AI, evaluate its output, and correct it when it's wrong. This concept introduces a new "middle loop" in the software development process, positioned between the traditional inner loop (writing code, testing, debugging) and outer loop (commit, review, CI/CD, deploy, observe).
As AI automates the inner loop—generating code, running build-test cycles, and debugging—engineers increasingly find themselves supervising this work rather than performing it directly. This represents a traumatic change to what engineers do and the skills they need, though not necessarily "the end of programming"—rather a redefinition of what programming means.
This shift has created genuine uncertainty among software engineers about their career futures. The skills they've developed and honed over years are being commoditized, while narratives about AI either threatening jobs or suggesting engineers should "move upstream" into architecture provide little practical guidance for daily work.
Maturity Models for AI Agent Adoption
As AI's coding capabilities outpace our ability to wield them effectively, engineers and researchers are developing frameworks to understand the progression from basic AI usage to sophisticated integration.
Bassim Eledath outlines 8 levels of Agentic Engineering:
- Tab Complete
- Agent in IDE
- Context Engineering
- Compounding Engineering
- MCP & Skills Harness
- Spec Engineering
- Background Agents
- Autonomous Agent Teams
Similarly, Steve Yegge proposed eight levels in "Welcome to Gas Town":
- Zero or Near-Zero AI: basic code completions and occasional Chat questions
- Coding agent in IDE with permission controls
- Narrow coding agent in sidebar with permission controls
- Agent in IDE, YOLO mode: permissions turned off, wider capabilities
- In IDE, wide agent: gradually fills the screen, code becomes diffs
- CLI, single agent, YOLO mode
- CLI, multi-agent, YOLO mode: 3-5 parallel instances
- 10+ agents, hand-managed: building custom orchestrators
These models, while not entirely precise, provide useful frameworks for understanding how organizations and individuals are progressing in their AI adoption journeys. The gap between AI capabilities and effective practice closes gradually through these levels, as evidenced by teams that can ship products like Anthropic's Cowork in 10 days versus those struggling with broken proofs of concept.
Regenerative Software Architecture
Chad Fowler challenges us to rethink our approach to code generation in an AI-augmented world. He argues that the real constraint has shifted from producing code to replacing it safely. "Regenerative software," as he describes it, doesn't work when the unit of generation is an entire application. Instead, regeneration works best when the unit of generation is a component that can be composed into a system architecture.
Fowler outlines several architectural constraints that facilitate safe component replacement:
- A small number of communication patterns
- Clear ownership of data ("exclusive mutation authority for each dataset to a single component")
- Clear evaluation surfaces that allow behavior verification independent of implementation
- The right size of components (natural grain), determined by data ownership boundaries and evaluation surfaces
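These constraints can be sketched in code. The following is a minimal, hypothetical illustration (the class and function names are our own, not Fowler's): a component that holds exclusive mutation authority over its dataset and exposes an evaluation surface, a behavioral contract that any regenerated replacement must satisfy before it can be swapped in.

```python
from typing import Protocol


class InventoryComponent(Protocol):
    """Evaluation surface: any implementation, human- or AI-written,
    must satisfy this behavioral contract."""
    def add_stock(self, sku: str, qty: int) -> None: ...
    def available(self, sku: str) -> int: ...


class InventoryV1:
    """Holds exclusive mutation authority over the inventory dataset:
    no other component writes this data directly."""
    def __init__(self) -> None:
        self._stock: dict[str, int] = {}

    def add_stock(self, sku: str, qty: int) -> None:
        if qty < 0:
            raise ValueError("qty must be non-negative")
        self._stock[sku] = self._stock.get(sku, 0) + qty

    def available(self, sku: str) -> int:
        return self._stock.get(sku, 0)


def evaluate(component: InventoryComponent) -> bool:
    """Behavioral checks run against the surface, independent of the
    implementation; a regenerated component must pass before replacing
    the old one."""
    component.add_stock("widget", 5)
    component.add_stock("widget", 3)
    return (component.available("widget") == 8
            and component.available("other") == 0)


assert evaluate(InventoryV1())
```

Because `evaluate` touches only the `Protocol` surface, an AI-regenerated `InventoryV2` with a completely different internal representation can be verified and swapped in without inspecting its code.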
These principles align with long-standing goals in software architecture—dividing complex systems into networks of replaceable components. In the era of agentic engineering, these principles become even more critical as AI systems generate and modify code at unprecedented speeds.
Code Review and Evaluation in the AI Era
Ankit Jain provocatively suggests that humans should neither write nor review code in the AI era. He points out that humans already struggle to keep up with code review at human writing speeds, resulting in PRs sitting for days, rubber-stamp approvals, and reviewers skimming large diffs.
Jain proposes a shift to layers of evaluation filters:
- Compare Multiple Options
- Deterministic Guardrails
- Human-Defined Acceptance Criteria
- Permission Systems as Architecture
- Adversarial Verification
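The layered-filter idea can be sketched as a pipeline: generate multiple candidates, discard those failing each layer in turn, then compare the survivors. The sketch below is a hypothetical illustration of the concept, not Jain's implementation; the guardrail and acceptance checks are deliberately simplistic stand-ins.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Candidate:
    """One AI-generated implementation competing to be merged."""
    code: str


def deterministic_guardrails(c: Candidate) -> bool:
    """Cheap, deterministic checks (stand-in for linting, policy rules)."""
    return "eval(" not in c.code and "DROP TABLE" not in c.code


def acceptance_criteria(c: Candidate) -> bool:
    """Stand-in for human-defined acceptance criteria: here, the
    candidate must expose a required entry point."""
    return "def handler(" in c.code


def run_filters(candidates: list[Candidate],
                layers: list[Callable[[Candidate], bool]]) -> list[Candidate]:
    """Apply each filter layer; survivors are ranked for comparison
    (here, shortest first as a crude proxy for simplicity)."""
    survivors = [c for c in candidates if all(layer(c) for layer in layers)]
    return sorted(survivors, key=lambda c: len(c.code))


candidates = [
    Candidate("def handler(req):\n    return eval(req)"),   # fails guardrails
    Candidate("def handler(req):\n    return req.json()"),  # survives all layers
    Candidate("def process(req):\n    return None"),        # fails acceptance
]
best = run_filters(candidates, [deterministic_guardrails, acceptance_criteria])
assert len(best) == 1 and "req.json" in best[0].code
```

The point of the structure is that no human reads the rejected candidates at all; human effort goes into authoring the filter layers rather than reviewing diffs.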
While this vision represents a radical departure from current practices, it acknowledges the reality of information overload in modern development environments. However, the notion that "the code doesn't matter" sits uncomfortably with many experienced engineers who find that precise, understandable code often serves as the clearest expression of intent.
The Evolving Role of Software Engineers
As AI transforms software development, engineers are finding their roles shifting in several directions:
- Supervisors: Directing AI systems, evaluating their outputs, and correcting errors
- Architects: Designing systems that can be easily regenerated or modified by AI
- Trainers: Teaching others how to use AI tools effectively
- Ethicists: Ensuring AI-generated code meets quality and ethical standards
The educational system is also adapting, as evidenced by the shift from treating AI use as a disclosure problem to a subject of instruction. Students are now learning how to use AI tools to improve their work—prompting for research without copying output, identifying when summaries drift from sources, and developing critical evaluation skills.
Preparing for the AI-Augmented Future
The transformation of software engineering through AI is neither fully realized nor completely understood. What's clear is that the middle loop—where engineers supervise AI performing what they used to do by hand—represents a new frontier in development practice.
For organizations and individuals navigating this transition, several strategies emerge:
- Develop frameworks for evaluating AI-generated code
- Architect systems for regenerative capabilities
- Create clear evaluation surfaces and data ownership boundaries
- Progress through maturity models systematically
- Focus on supervisory skills rather than purely creative ones
- Experiment with different levels of AI autonomy
As Martin Fowler notes, there is still plenty of engineering work in software engineering, even if it looks different from what most engineers trained for. Supervisory engineering work and the middle loop provide one framework for understanding what this different future looks like, grounded in what engineers are actually experiencing.
The servant leadership of the future may involve serving AI agents by providing clear direction and evaluation criteria—a reversal of traditional human-to-machine relationships that reflects the profound changes underway in software development.
This transformation presents both challenges and opportunities. For those willing to adapt and develop new skills, the AI-augmented future offers the potential to tackle more complex problems and create more sophisticated systems than ever before. For those who resist change, the transition may be more difficult. The middle loop represents not just a new workflow, but a new way of thinking about the relationship between human creativity and machine capability in software engineering.
