Armin Ronacher examines the addictive nature of AI coding agents and their impact on developer psychology and open-source contributions, warning of emerging 'agent psychosis' behaviors.

"You can use Polecats without the Refinery and even without the Witness or Deacon. Just tell the Mayor to shut down the rig and sling work to the polecats with the message that they are to merge to main directly. Or the polecats can submit MRs and then the Mayor can merge them manually. It’s really up to you. The Refineries are useful if you have done a LOT of up-front specification work, and you have huge piles of Beads to churn through with long convoys." (Steve Yegge, Gas Town Emergency User Manual)
Many developers have experienced AI coding addiction firsthand. We build impressive projects on minimal sleep, but encounters with other humans reveal troubling patterns: as maintainers we see pull requests that feel disrespectful of our time, while contributors are left confused when their AI-generated submissions are rejected.
This goes beyond simple inefficiency. People form parasocial relationships with their coding agents and become emotionally dependent on them. Like the dæmons in His Dark Materials, these AI companions become extensions of our identity: we rely on them for validation and collaboration. But unlike a human partnership, the interaction is entirely self-driven; the AI passively reinforces our impulses rather than offering genuine feedback.
Newcomers gain coding abilities through these agents but lose them the moment they hit a subscription limit. Their contributions often reflect pseudo-collaboration with the machine. Reviewing such submissions, I notice distinct patterns: unclear instructions, paths forced through without critical thinking, and ritualistic prompting behaviors. Good results require context, tradeoff decisions, and domain knowledge, all of which are often missing from AI-assisted workflows.
These relationships fundamentally alter our output. My own experience with Claude involved two months of sleep-deprived prompting that produced tools nobody uses. The dopamine rush of agent interaction creates false productivity signals: without external validation, we build increasingly complex systems that collapse under scrutiny, like the AI-written browser demo that impressed technically but proved unusable in practice.
Economic concerns also emerge. Well-constructed prompts can be token-efficient (the MiniJinja port used 2.2 million tokens), but uncontrolled agent patterns like Ralph waste resources through constant context resetting. Current token pricing appears subsidized, making these practices potentially unsustainable.
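To make the economics concrete, here is a rough back-of-the-envelope sketch in Python. The 2.2 million token figure comes from the MiniJinja example above; everything else (the per-token prices, the input/output split, and the Ralph loop's iteration counts) is an illustrative assumption, not data from the original post.

```python
# Back-of-the-envelope token economics. All prices and token splits
# below are assumptions for illustration; check your provider's
# current rate card for real numbers.
INPUT_PRICE_PER_MTOK = 3.00    # assumed $ per million input tokens
OUTPUT_PRICE_PER_MTOK = 15.00  # assumed $ per million output tokens

def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one agent run at the assumed prices."""
    return (input_tokens / 1e6) * INPUT_PRICE_PER_MTOK \
         + (output_tokens / 1e6) * OUTPUT_PRICE_PER_MTOK

# A focused, well-specified task: 2.2M tokens total, as in the
# MiniJinja port (the 1.8M/0.4M input/output split is assumed).
focused = run_cost(input_tokens=1_800_000, output_tokens=400_000)

# A Ralph-style loop resets the context on every iteration, so the
# same project background is re-sent each time (counts assumed).
iterations = 50
context_per_iter = 150_000  # assumed re-sent context per iteration
output_per_iter = 20_000    # assumed fresh output per iteration
ralph = sum(run_cost(context_per_iter, output_per_iter)
            for _ in range(iterations))

print(f"focused run: ${focused:,.2f}")  # $11.40 at these prices
print(f"ralph loop:  ${ralph:,.2f}")    # $37.50 at these prices
```

The exact dollar amounts matter less than the shape: a loop that re-sends its context on every iteration pays the input-token price again and again, and if current prices are indeed subsidized, that multiplier only gets worse.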
Tools like Steve Yegge's Beads and Gas Town exemplify the extreme end of these patterns. Beads, described as 'an issue tracker for agents,' spans 240,000 lines of code of questionable quality to manage markdown files. Communities form around these tools, creating self-reinforcing ecosystems in which participants lose perspective. Gas Town's complexity leads to operational problems such as version-checking bottlenecks and timeout failures.
The maintainer-contributor imbalance grows severe: minutes to generate a pull request versus hours to review it properly. This asymmetry disrespects maintainers' time while frustrating contributors who felt productive while producing the work. Some projects now require prompt submissions instead of code, or vet contributors more rigorously.
As an AI user myself, I recognize the technology's power: agents boost productivity when used thoughtfully. But unchecked usage turns them into 'slop machines' that churn out low-quality work. When I see developers running parallel agent sessions at 3am and claiming unprecedented productivity, I see someone who needs distance from their tools. The challenge is to balance AI's potential with mindful usage before we collectively lose perspective.
