Zendesk argues that generative AI has shifted the software delivery bottleneck from writing code to "absorption capacity": the organizational ability to integrate, verify, and derive value from rapidly generated code.
Zendesk has introduced a provocative new framework for understanding software delivery in the age of AI, arguing that generative coding tools have fundamentally shifted the industry's bottleneck from code production to what they call "absorption capacity." This reframing challenges conventional wisdom about productivity gains from AI-assisted development and suggests that organizational design, not technical capability, now determines delivery velocity.
The New Bottleneck: Absorption Capacity
In a detailed engineering blog post, Zendesk's Bence A. Tóth argues that generative AI has made code abundant to the point where implementation is no longer the primary constraint on software delivery. Instead, the limiting factor has become the organization's ability to define problems clearly, integrate changes into broader systems, verify correct behavior, and convert implementation into dependable value.
Tóth draws on manufacturing analogies to illustrate the concept: improving one part of a system doesn't increase total throughput if another constraint remains. In software delivery, he contends, generative AI has lowered the cost of producing code enough that implementation is no longer the narrowest constraint. The bottleneck has shifted upstream to problem definition, architectural coherence, and verification capacity.
Four Pillars of Absorption Capacity
Zendesk proposes four practical responses to address this new constraint:
1. Shared Problem Framing
Rather than treating problem definition as a one-way handoff from product to engineering, Zendesk advocates for collaborative problem framing. This approach recognizes that ambiguous requirements can now produce plausible but misaligned implementations at scale. When AI can generate working code from vague prompts, the quality of problem definition becomes critical.
2. Low-Cost Confidence Building
Teams should strengthen verification loops through multiple mechanisms: CI signals, static analysis, security checks, observability, staged rollouts, and rapid product feedback after deployment. The goal is to establish confidence in changes quickly and reliably, preventing the accumulation of technical debt from unchecked AI-generated code.
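One of the cheaper confidence signals named above is the staged rollout. As a minimal sketch (not Zendesk's implementation; the thresholds, function name, and parameters are illustrative assumptions), a canary gate can compare a new version's error rate against the stable baseline before promoting it:

```python
# Hypothetical canary gate for a staged rollout: compares the canary's
# error rate against the stable baseline and returns a verdict.
# Thresholds and names are illustrative, not taken from the source.

def canary_verdict(baseline_errors: int, baseline_requests: int,
                   canary_errors: int, canary_requests: int,
                   max_ratio: float = 1.5, min_requests: int = 100) -> str:
    """Return 'promote', 'rollback', or 'wait' for a canary deployment."""
    if canary_requests < min_requests:
        return "wait"  # too little traffic to build confidence yet
    baseline_rate = baseline_errors / max(baseline_requests, 1)
    canary_rate = canary_errors / max(canary_requests, 1)
    # Roll back only if the canary is meaningfully worse than baseline
    # and the absolute error rate is non-trivial.
    if canary_rate > baseline_rate * max_ratio and canary_rate > 0.001:
        return "rollback"
    return "promote"

# Example: baseline at 0.1% errors, canary at 0.6% -> roll back.
print(canary_verdict(10, 10_000, 3, 500))  # rollback
```

The point of such a gate is exactly the "low-cost" property: it turns post-deployment observability into an automatic go/no-go signal instead of a manual review step.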
3. Architectural Scaffolding
Clear boundaries, consistent naming conventions, templates, lightweight Architecture Decision Records (ADRs), and guardrails enforced in CI become essential infrastructure for AI-assisted delivery. These structures provide the context and constraints that help AI-generated code integrate coherently with existing systems.
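A guardrail "enforced in CI" can be as small as a script that fails the build when a change crosses a declared module boundary. The sketch below (module names and rule format are hypothetical, assumed for illustration) uses Python's standard `ast` module to flag forbidden imports:

```python
# Illustrative CI guardrail: scan a module's source for imports that
# cross a declared architectural boundary. The FORBIDDEN pairs are
# hypothetical example rules, not a real codebase's policy.
import ast

# (importing module, imported top-level package) pairs that are banned.
FORBIDDEN = {("billing", "search"), ("search", "billing")}

def boundary_violations(module: str, source: str) -> list:
    """Return imported names in `source` that violate a boundary rule."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        targets = []
        if isinstance(node, ast.Import):
            targets = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            targets = [node.module]
        for target in targets:
            if (module, target.split(".")[0]) in FORBIDDEN:
                violations.append(target)
    return violations

# A CI job would run this over changed files and fail on any violation.
print(boundary_violations("billing", "from search.index import query"))
```

Because the rule lives in CI rather than in reviewers' heads, AI-generated changes get the same architectural constraint as human-written ones, which is the scaffolding role the article describes.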
4. Throughput Over Output
Zendesk recommends measuring throughput rather than output, favoring metrics such as lead time, review queue time, change failure rates, rollbacks, and incident load over traditional vanity metrics like lines of code or pull request volume. This shift recognizes that the value of AI-generated code lies in its integration and impact, not its volume.
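Two of the metrics named above, lead time and change failure rate, fall straight out of deployment records. A minimal sketch, assuming a hypothetical record format with `committed`, `deployed`, and `failed` fields:

```python
# Compute median lead time and change failure rate from deployment
# records. The data and field names are illustrative assumptions.
from datetime import datetime
from statistics import median

deployments = [
    {"committed": datetime(2025, 1, 6, 9, 0),
     "deployed": datetime(2025, 1, 6, 15, 0), "failed": False},
    {"committed": datetime(2025, 1, 7, 10, 0),
     "deployed": datetime(2025, 1, 8, 10, 0), "failed": True},
    {"committed": datetime(2025, 1, 8, 11, 0),
     "deployed": datetime(2025, 1, 8, 17, 0), "failed": False},
]

# Lead time: commit-to-production latency, summarized as a median.
lead_hours = median(
    (d["deployed"] - d["committed"]).total_seconds() / 3600
    for d in deployments
)

# Change failure rate: share of deployments that caused a failure.
failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

print(f"median lead time: {lead_hours:.1f}h, "
      f"change failure rate: {failure_rate:.0%}")
```

Unlike lines of code or PR volume, both numbers get worse when AI-generated changes pile up faster than the organization can verify them, which is what makes them throughput rather than output metrics.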
The Architectural Amplification Effect
The most significant insight from Zendesk's analysis is that AI will scale whatever structures already exist in the codebase and delivery workflow. In systems with clear module boundaries, documented invariants, and well-understood implementation paths, AI can accelerate work while remaining easier to direct and verify. Conversely, in systems with ambiguous conventions or architectural drift, the same acceleration can amplify inconsistency, increase review burden, and weaken trust in changes that may look locally correct while degrading the system more broadly.
This "amplification effect" suggests that organizations with mature architectural practices may see outsized benefits from AI-assisted development, while those with technical debt or unclear conventions may experience accelerated deterioration.
Industry Context and Implications
Zendesk's perspective aligns with recent observations from other major technology companies. Agoda similarly argued that coding was never the real bottleneck, and that specification and verification become more important as implementation accelerates. However, Zendesk pushes this argument further by naming the replacement constraint and framing it as an organizational design problem.
For architects and engineering leaders, the implication is clear: the advantage may not go to teams that generate the most code, but to those that can safely absorb more meaningful change. This suggests that investments in architectural clarity, verification infrastructure, and collaborative problem-solving may yield higher returns than investments in AI coding tools alone.
The Human Element in AI-Accelerated Delivery
The shift from code generation to absorption capacity fundamentally changes the role of human developers. Rather than focusing on writing code, developers must become skilled at problem definition, architectural integration, and verification. This evolution requires different competencies and may favor experienced engineers who understand system context over junior developers who excel at implementation details.
Organizations must also reconsider their team structures and workflows. If problem framing becomes a shared responsibility between product and engineering, traditional handoff models may need to evolve. If architectural scaffolding becomes essential for AI-assisted delivery, architectural review and maintenance may require greater emphasis.
Looking Forward
Zendesk's framework suggests that the next wave of productivity gains in software development will come not from better code generation, but from better absorption capacity. Organizations that invest in clear problem definition, robust verification, architectural clarity, and meaningful metrics may find themselves able to absorb and benefit from AI-generated code far more effectively than competitors.
This perspective also suggests that the gap between high-performing and low-performing engineering organizations may widen. Teams with mature practices around architecture, verification, and collaboration may see dramatic productivity improvements from AI tools, while teams struggling with technical debt or unclear processes may find that AI accelerates their problems rather than their solutions.
As generative AI continues to evolve, the question for engineering leaders is no longer how to generate more code, but how to build organizations capable of absorbing and integrating code changes safely and effectively. The bottleneck has shifted from the technical to the organizational, and success will depend on addressing this new constraint.


About the Author
Eran Stiller is Cartesian's Chief Software Architect, based in Melbourne, Australia. As a seasoned software architect and CTO, Eran has designed, implemented, and reviewed software solutions across multiple business domains. He has many years of experience in software development and a track record of public speaking and community contribution.
