As AI coding tools flood the market, developers face a dangerous paradox: the more they rely on AI-generated code, the more they lose the skills needed to effectively manage these tools. This growing cognitive debt threatens both individual careers and team productivity.
The developer ecosystem is experiencing a profound shift with the rise of 'agentic coding'—a workflow where AI systems generate code while humans take on an 'orchestrator' role. This approach, heavily promoted by AI-first companies, promises increased productivity but carries significant risks that many organizations are only beginning to recognize.
The agentic coding paradigm typically involves developers defining requirements at both micro and macro levels, generating plans, then using AI tools to implement those plans with minimal direct coding. The human's role becomes providing 'good taste,' reviewing outputs, and steering the agents toward execution. This workflow often resembles pulling a slot machine lever repeatedly, with developers iterating through multiple AI-generated versions until satisfactory results emerge.
The Hidden Costs of AI-First Development
While coding assistants offer undeniable benefits, the trade-offs are becoming increasingly apparent:
- Increased system complexity to manage AI's non-deterministic nature
- Skill atrophy across development teams
- Vendor lock-in as teams become dependent on specific AI services
- Fluctuating costs as token consumption varies unpredictably
- Reduced code quality as speed often trumps understanding and conciseness
The most concerning aspect is what Sandor Nyako, a Director of Software Engineering at LinkedIn who oversees 50 engineers, calls the 'paradox of supervision.' 'Effectively using Claude requires supervision, and supervising Claude requires the very coding skills that may atrophy from AI overuse,' he explains. Nyako has already asked his team to avoid using these tools for 'tasks that require critical thinking or problem-solving.'
Junior Developers Face a Steeper Learning Curve
Junior developers entering the workforce during this AI transition face unique challenges. The traditional path of learning through direct coding and debugging—where mistakes become valuable lessons—is being disrupted. Reviewing AI-generated code, while important, represents only about half of the learning process at best.
'How would someone question whether AI is accurate if they don't have critical thinking?' Nyako asks. The concern isn't just theoretical: Anthropic research revealed a 'precipitous 47% drop-off in debugging skills' among developers who incorporated AI aggressively into their workflows.
The Vendor Lock-In Dilemma
The recent Claude outage provided a stark demonstration of this dependency. Numerous LinkedIn posts highlighted how entire engineering teams were at a standstill, their workflows having evolved to require constant access to AI services that were suddenly unavailable.
'When you use these fully agentic workflows, the model providers essentially own you,' says Primeagen, reflecting a growing industry concern. Unlike employee costs, which remain relatively stable, token expenses fluctuate unpredictably. Model providers tend to follow a familiar arc: strong benchmark results, a wave of hype, then a reality phase in which users complain that models have been 'nerfed' and now burn through 2-3x more tokens to achieve the same results.
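A quick back-of-the-envelope calculation shows why that 2-3x token multiplier is hard to budget for. All prices and usage figures below are made up for the sketch; they are not any provider's actual rates.

```python
# Hypothetical per-token pricing (invented figures, not a real price list).
PRICE_PER_M_INPUT = 3.00    # $ per million input tokens
PRICE_PER_M_OUTPUT = 15.00  # $ per million output tokens

def monthly_cost(input_m: float, output_m: float) -> float:
    """Monthly spend given token volumes in millions."""
    return input_m * PRICE_PER_M_INPUT + output_m * PRICE_PER_M_OUTPUT

baseline = monthly_cost(input_m=200, output_m=50)   # a "good month": $1,350
# After a perceived model regression, the same work takes ~2.5x the tokens:
regressed = monthly_cost(input_m=500, output_m=125)  # $3,375

print(f"baseline: ${baseline:,.0f}, after regression: ${regressed:,.0f}")
```

The work delivered is identical in both months; only the token burn changed, which is the kind of line-item volatility salaries never show.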
The Abstraction Fallacy
Proponents often frame AI coding tools as just another layer of abstraction, comparing them to transitions from assembly to FORTRAN or the adoption of higher-level languages. However, this analogy misses a crucial difference: previous transitions involved deterministic systems that maintained a clear relationship between input and output.
'What you say is often not what you mean, and LLMs fill in ambiguity with assumptions (or hallucinations),' explains Dax, creator of OpenCode, an open-source coding agent. This fundamental difference—replacing deterministic systems with probabilistic ones—introduces new challenges that cannot be solved simply by 'moving up the stack.'
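The deterministic-versus-probabilistic distinction can be made concrete with a toy contrast. The snippet below is only an analogy: `zlib.crc32` stands in for a compiler-like transformation, and a random choice among canned completions stands in for LLM sampling.

```python
import random
import zlib

# Compiler-like: the same input always maps to the same output.
def deterministic_transform(source: str) -> int:
    return zlib.crc32(source.encode())

assert deterministic_transform("x = 1") == deterministic_transform("x = 1")

# LLM-like: sampling means the mapping from prompt to output is a
# distribution, not a function. The completions here are invented.
COMPLETIONS = ["x = 1", "x: int = 1", "x = int('1')"]

def probabilistic_transform(prompt: str) -> str:
    return random.choice(COMPLETIONS)

# Repeated runs on an identical prompt can diverge.
outputs = {probabilistic_transform("assign 1 to x") for _ in range(100)}
```

With the deterministic transform, 'moving up the stack' preserved a reproducible input-output contract; with the probabilistic one, every run is a fresh draw, which is the gap the abstraction analogy papers over.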
A Better Approach: Demote AI's Role
Rather than replacing human coding entirely, Lars Faye proposes a balanced approach that leverages AI's strengths while preserving developer skills:
- Use LLMs for generating specs and plans while maintaining active implementation engagement
- Write pseudo-code when engaging with models to close the distance between request and output
- Generate only what can be thoroughly reviewed in a single sitting
- Never delegate implementation of tasks you couldn't perform yourself
- Use AI as a secondary tool rather than the primary coding mechanism
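Faye's pseudo-code point can be made concrete. In the hypothetical example below, the prompt spells out the algorithm step by step, so reviewing the model's output becomes a line-by-line comparison rather than a judgment call; the `dedupe` function shows what a faithful implementation of that pseudo-code looks like. Both the prompt and the function are invented for illustration.

```python
# A hypothetical pseudo-code prompt: the developer specifies the algorithm,
# leaving the model only the mechanical translation into Python.
PROMPT = """Implement exactly this pseudo-code in Python:
    seen = empty set
    for each record (an (email, name) tuple):
        key = (lowercased email, lowercased stripped name)
        if key not in seen: remember key and keep the record
    return kept records in original order
"""

def dedupe(records: list[tuple[str, str]]) -> list[tuple[str, str]]:
    # What a faithful implementation of the pseudo-code above looks like.
    seen: set[tuple[str, str]] = set()
    kept = []
    for email, name in records:
        key = (email.lower(), name.strip().lower())
        if key not in seen:
            seen.add(key)
            kept.append((email, name))
    return kept
```

Because the developer authored the algorithm, the 'distance between request and output' shrinks to a syntax check, and the implementation skill stays exercised.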
'Use them like the Ship's Computer, not Data,' Faye suggests, referencing Star Trek characters. 'I'm not going faster, but I'm doing better quality work.'
The Path Forward
As organizations increasingly adopt AI coding tools, the challenge will be maintaining a balance between productivity and skill development. The most successful teams will likely be those that view these tools as enhancements to human capabilities rather than replacements for them.
The stakes are significant. As Jeremy Howard, creator of fast.ai, warns: 'People who go all in on AI agents now are guaranteeing their obsolescence. If you outsource all your thinking to computers, you stop upskilling, learning, and becoming more competent.'
The developer community must navigate this transition carefully, ensuring that the pursuit of productivity doesn't come at the cost of the critical thinking and problem-solving skills that have made software engineering valuable in the first place.
