Fragments: January 22 - AI Energy, Agentic Rigor, and the Erosion of Constitutional Norms
#AI

Backend Reporter
4 min read

The latest edition of Martin Fowler's "Fragments" series connects the technical realities of AI's energy consumption with the human discipline required to wield AI effectively, while also examining parallels between authoritarian trends in the U.S. and abroad. The common thread is the need for rigorous systems thinking: whether technical, environmental, or governmental, systems require discipline and transparency to function properly.

The AI Energy Question: Beyond the "Typical Query"

Simon Couch's analysis of AI's electricity consumption cuts through the vague estimates that dominate public discourse. His personal audit reveals that a typical development session, running a few Claude Code instances for a few hours, consumes approximately 1,300 Wh. At the commonly cited figure of roughly 0.3 Wh per query, that is the energy of about 4,400 "typical queries", a ratio that highlights the inadequacy of per-query estimates for understanding real-world usage patterns.

The key insight here isn't that the consumption is catastrophic (it's comparable to running a dishwasher), but that our measurement frameworks are insufficient. As Couch notes, this is "napkin math" because we lack robust data on how these models actually use resources. This isn't merely an academic concern; it's a systems design problem. Without accurate energy accounting, we cannot optimize for efficiency, set meaningful sustainability goals, or even understand the true cost of our development workflows.
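
To make the napkin math concrete, here is a minimal sketch of the session-level arithmetic in Python. The per-query figure (about 0.3 Wh) and the dishwasher figure (about 1,500 Wh per cycle) are assumed round numbers chosen to be consistent with the ratios reported above, not measured values:

    # Couch-style napkin math for a development session's energy footprint.
    # Both coefficients below are assumptions for illustration, not data.

    SESSION_WH = 1_300               # a few Claude Code instances over a few hours
    WH_PER_TYPICAL_QUERY = 0.3       # commonly cited per-query estimate (assumed)
    DISHWASHER_WH_PER_CYCLE = 1_500  # rough energy of one dishwasher cycle (assumed)

    query_equivalents = SESSION_WH / WH_PER_TYPICAL_QUERY
    dishwasher_cycles = SESSION_WH / DISHWASHER_WH_PER_CYCLE

    print(f"Session energy:    {SESSION_WH} Wh")
    print(f"Query equivalents: ~{query_equivalents:,.0f}")  # ~4,333, i.e. "roughly 4,400"
    print(f"Dishwasher cycles: ~{dishwasher_cycles:.1f}")   # ~0.9 of a cycle

The point of writing it down is not precision; it is that the inputs are guesses, and better inputs would change the answer.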

For distributed systems engineers, this mirrors classic challenges in resource monitoring. Just as we instrument applications to track CPU, memory, and I/O, we need similar observability for AI workloads. The absence of this data is a critical gap in our tooling stack.
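
As a sketch of what such observability could look like, the hypothetical Python decorator below logs latency, token usage, and a rough energy estimate for each AI call. The energy coefficient and the shape of the returned usage data are assumptions for illustration; real accounting would need usage data from the model provider:

    import functools
    import time

    WH_PER_1K_TOKENS = 0.15  # assumed placeholder coefficient, not a measured figure

    def track_ai_energy(fn):
        """Log latency, token usage, and a rough energy estimate per call."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed = time.perf_counter() - start
            tokens = result.get("total_tokens", 0)  # assumes calls report token counts
            est_wh = tokens / 1000 * WH_PER_1K_TOKENS
            print(f"{fn.__name__}: {elapsed:.2f}s, {tokens} tokens, ~{est_wh:.2f} Wh (estimate)")
            return result
        return wrapper

    @track_ai_energy
    def generate_code(prompt: str) -> dict:
        # Stand-in for a real model call; returns usage counts the way many APIs do.
        return {"text": "def hello(): ...", "total_tokens": 1200}

    generate_code("Refactor the billing module")

The decorator is deliberately crude: the value is in making energy a first-class, logged metric alongside latency, so it can be budgeted and optimized like any other resource.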

Agentic Coding: Relocating Discipline, Not Abandoning It

Chad Fowler's perspective on agentic coding draws a parallel to the discipline introduced by Extreme Programming (XP). The core thesis is that new capabilities don't eliminate the need for rigor—they shift where that rigor is applied.

In the XP era, discipline manifested as:

  • Rigorous testing practices
  • Continuous integration
  • Codebase health maintenance

With AI-enabled development, the discipline must relocate to:

  • Specification precision: Treating generation as a capability that demands clearer, more precise requirements
  • Evaluation systems: Building validation mechanisms that are harder to fool than their predecessors
  • Progress metrics: Refusing to conflate velocity with meaningful progress

This is a fundamental systems thinking exercise. The introduction of a new component (AI code generation) changes the failure modes and optimization points of the entire development system. The engineers who thrive will be those who recognize that the locus of complexity has moved, not vanished. They'll invest in better specifications, more comprehensive evaluation, and clearer definitions of "done."
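
As a minimal illustration of an evaluation system that is harder to fool, the sketch below counts a generated candidate as progress only if it passes every executable spec case. The function name parse_duration and the spec table are hypothetical, and in practice the candidate source would run in a sandbox:

    # Spec-driven evaluation gate: a candidate passes only if it satisfies
    # every executable spec case; a crash is a failure, not partial credit.

    SPEC_CASES = [
        ("90s", 90),
        ("2m", 120),
        ("1h30m", 5400),
    ]

    def evaluate(candidate_source: str) -> bool:
        """Exec the generated source and check it against all spec cases."""
        namespace = {}
        try:
            exec(candidate_source, namespace)  # sandbox untrusted code in real use
            fn = namespace["parse_duration"]
            return all(fn(arg) == expected for arg, expected in SPEC_CASES)
        except Exception:
            return False

    # A candidate that handles only two of the three cases fails the gate,
    # no matter how quickly it was generated.
    candidate = '''
    def parse_duration(s):
        if s.endswith("s"):
            return int(s[:-1])
        if s.endswith("m"):
            return int(s[:-1]) * 60
        raise ValueError(s)
    '''
    print(evaluate(candidate))  # False: "1h30m" is unhandled

Velocity only registers here when a candidate actually clears the gate; everything else is activity.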

This aligns with Martin Fowler's own view: "They’ll treat generation as a capability that demands more precision in specification, not less."

The Constitutional Erosion: A Systems Failure

Fowler's political analysis, while outside typical technical domains, shares the same analytical framework. He references Noah Smith's assessment of ICE and CBP actions in Minnesota, noting a "consistent record of brutality, aggression, dubious legality, and unprofessionalism." The parallel to authoritarian systems is explicit: "When a federal officer gives you instructions, you abide by them and then you get to keep your life" is a perfect description of an authoritarian police state.

The systems thinking here is about institutional design and accountability. Constitutional systems rely on checks, balances, and transparency. When these mechanisms fail—when "constitutional Republicans" become "absent or quiescent"—the system degrades. Fowler's worry is that Minneapolis represents not an anomaly but a harbinger: "I fear that what we’ve seen in Minneapolis will be a harbinger of worse to come."

He draws a direct parallel to Venezuela's trajectory, casting Trump as a "Hugo Chávez figure" and asking: "who is Trump's Maduro?" This isn't political commentary for its own sake; it's an observation about institutional decay and the long-term consequences of norm erosion.

Connecting the Threads: Systems Require Oversight

The through-line across these fragments is the necessity of rigorous observation and accountability:

  1. AI Energy: We need better instrumentation to understand the true costs of our tools.
  2. Agentic Development: We need better evaluation systems to maintain quality as generation capabilities expand.
  3. Constitutional Governance: We need better oversight mechanisms to prevent institutional decay.

In each case, the absence of proper measurement and accountability leads to suboptimal or dangerous outcomes. The dishwasher comparison for AI energy is telling—it's not that the consumption is inherently bad, but that we're operating without proper visibility. Similarly, the danger in agentic coding isn't the technology itself, but the temptation to mistake velocity for progress. And in governance, the danger isn't change itself, but change without accountability.

Looking Forward

Fowler's closing note about sharing learnings from Thoughtworks' AI/works™ platform suggests a commitment to transparency. The platform is "in its early days," but the intent to share what's learned indicates a recognition that these systems—whether technical platforms or political institutions—improve through open examination and iteration.

The fragments from January 22 paint a picture of a world where systems thinking is more critical than ever. Whether we're optimizing AI energy consumption, building evaluation systems for generated code, or safeguarding constitutional norms, the principles remain consistent: measure carefully, design for accountability, and never mistake activity for progress.

The challenge ahead is to maintain this rigor across all domains—technical, environmental, and civic—before the costs of our current opacity become irreversible.
