Matthew Skelton argues that organisational structure, not technical limitation, is the primary barrier to successful AI adoption, with bounded agency and knowledge diffusion as the key enablers.
At QCon London 2026, Matthew Skelton, co-author of Team Topologies, argued that the biggest obstacle to successful AI adoption is not technical capability but organisational maturity. Speaking on the 10th anniversary of the framework's debut at the same conference, Skelton positioned bounded agency as the essential infrastructure for enabling both human and AI-driven innovation within enterprises.
The 80% Problem: Why Most AI Initiatives Fail
Skelton opened with a stark statistic: as many as 80% of firms report no tangible benefit from their AI investments. The root cause, he argued, isn't inadequate technology but rather organisations' inability to govern delegated agency effectively. This mirrors the challenges companies faced when first adopting DevOps practices—the technology was ready, but the organisational structures weren't.
Bounded Agency: The Foundation of Governable AI
The concept of bounded agency—intentionally constraining authority through rules and guardrails—emerges as the cornerstone of Skelton's framework. Just as stream-aligned teams in Team Topologies operate within clear boundaries to maintain focus and mission clarity, AI agents require similar constraints to function effectively without creating chaos.
This approach directly addresses the Excessive Agency vulnerability (LLM06) identified in the OWASP Top 10 for LLM Applications. Skelton posed a provocative question: "Why would a business grant an agentic AI write access to any data store across the organisation when they would never permit a human to do the same?"
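That boundary can be made concrete in code. The sketch below is a minimal, hypothetical illustration (the `AgentScope` and `guard` names are invented, not from the talk or any real framework): each agent carries an explicit, auditable scope listing the data stores it may read or write, and anything outside that scope is rejected rather than attempted.

```python
from dataclasses import dataclass

# Hypothetical sketch of bounded agency: explicit, auditable limits on what
# an agent may touch, checked before any operation is executed.

@dataclass(frozen=True)
class AgentScope:
    readable: frozenset  # data stores the agent may read
    writable: frozenset  # data stores the agent may write

class ScopeViolation(Exception):
    pass

def guard(scope: AgentScope, store: str, operation: str) -> None:
    """Reject any operation that exceeds the agent's declared bounds."""
    allowed = scope.writable if operation == "write" else scope.readable
    if store not in allowed:
        raise ScopeViolation(f"{operation} on '{store}' exceeds agent scope")

# A support agent may read the knowledge base but write only to its own notes.
scope = AgentScope(readable=frozenset({"kb", "notes"}),
                   writable=frozenset({"notes"}))
guard(scope, "kb", "read")            # permitted: within the declared scope
try:
    guard(scope, "billing", "write")  # denied: outside the bounded scope
except ScopeViolation as exc:
    print(exc)
```

The point of the pattern is that write access to "any data store across the organisation" is impossible by construction: it would require an explicit, reviewable entry in the scope, exactly as it would for a human.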
The Security Reality: 86% of Files Go Untouched
Industry research from data security firm Metomic provides sobering context: 86% of files in collaborative environments like Google Drive go untouched for 90 days, yet often remain indexed by AI agents. This creates an enormous attack surface for accidental exposure and data leakage. The solution isn't to abandon AI but to implement the same security boundaries we apply to human teams.
Cognitive Load Parallels: Humans and AI
Skelton drew a parallel between human cognitive load and AI context windows: just as humans struggle when their mental capacity is exceeded, AI agents lose coherence or hallucinate when operating outside their defined boundaries. This reframes the challenge from technical optimisation to organisational design, since managing load is what allows teams, both human and AI, to act as effective stewards rather than just owners.
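The parallel suggests treating an agent's context window like a team's cognitive load budget: refuse work that would exceed it instead of letting quality degrade silently. The sketch below is a hedged illustration; the budget figure and the four-characters-per-token estimate are rough assumptions, not a real tokenizer.

```python
# Illustrative sketch: enforce a context "load budget" for an agent, by
# analogy with limiting a human team's cognitive load. The budget and the
# chars-per-token heuristic below are assumptions for illustration only.

CONTEXT_BUDGET_TOKENS = 8_000

def estimate_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token)."""
    return max(1, len(text) // 4)

def within_budget(task_docs: list[str],
                  budget: int = CONTEXT_BUDGET_TOKENS) -> bool:
    """Reject task bundles that would overload the agent's context."""
    return sum(estimate_tokens(d) for d in task_docs) <= budget
```

An orchestrator using such a check would split or defer oversized tasks, the organisational-design equivalent of not piling extra responsibilities onto an already saturated team.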
Stewardship Over Ownership
The framing of "stewardship" rather than "ownership" represents a subtle but powerful shift in mindset. Skelton suggested that stewardship encourages looking after systems for those who come after, rather than merely possessing a codebase or specific model. This long-term perspective is essential for sustainable AI adoption.
Innovation and Practices Enabling Teams
To scale successful patterns across large organisations, Skelton introduced the Innovation and Practices Enabling Team—a specialised team type that identifies successful patterns within the organisation and shines a spotlight on them to assist other departments. This approach has proven effective at companies like Klarna and the Financial Times, which have established industry benchmarks for internal learning models.
The "Friendly FOMO" Model
JP Morgan provided a major proof-of-concept for knowledge diffusion over mandate. The bank reduced dependencies in its Athena platform by 60% using an opt-in model rather than enforcing top-down rules. By leveraging a social dynamic dubbed "friendly FOMO" (fear of missing out), JP Morgan drove adoption of its LLM Suite through shared success rather than mandatory compliance.
Adapt Together: The Next Chapter
These insights underpin Skelton's forthcoming book, Adapt Together, co-authored with Renee Hawkins. The work aims to operationalise value flow much as DevOps transformed software delivery. Skelton concluded that with technology now evolving faster than organisations can learn, active knowledge diffusion and a deep understanding of the systems teams work with are the only viable response to today's cultural and architectural challenges.
Practical Implications for Engineering Leaders
For engineering leaders implementing AI initiatives, Skelton's framework suggests several immediate actions:
Map AI boundaries to existing team structures: If your organisation already uses bounded agency for human teams, extend those same boundaries to AI agents.
Implement strict data access controls: Apply the same "least privilege" principles to AI agents that you would to human employees.
Create enabling teams for AI patterns: Establish teams dedicated to identifying and sharing successful AI implementations across the organisation.
Focus on stewardship metrics: Measure success by how well systems are maintained for future teams, not just immediate output.
Design for knowledge diffusion: Build opt-in models that leverage social dynamics rather than top-down mandates.
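The first two actions above can be sketched together: an agent inherits at most the data boundary of the team it is embedded in, and write access stays opt-in. Team and store names below are invented for illustration, not drawn from the talk.

```python
# Hypothetical sketch: derive an AI agent's least-privilege grants from the
# boundary of its host team. All team and store names are illustrative.

TEAM_OWNERSHIP = {
    "payments-team": {"payments-db", "payments-events"},
    "identity-team": {"user-directory"},
}

def agent_policy(team: str, read_only: bool = True) -> dict:
    """An agent gets at most its host team's boundary; writes are opt-in."""
    owned = TEAM_OWNERSHIP.get(team, set())
    return {"read": sorted(owned),
            "write": [] if read_only else sorted(owned)}

# A payments agent sees only payments stores, with nothing writable by default.
policy = agent_policy("payments-team")
```

Deriving agent permissions from team structure, rather than configuring them ad hoc, keeps the AI boundary and the human boundary from drifting apart as the organisation changes.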
The Cultural Shift Required
The most significant barrier to AI adoption isn't technical but cultural. Organisations must shift from viewing AI as a tool to be deployed to seeing it as a team member to be integrated. This requires the same organisational maturity that DevOps demanded—clear boundaries, shared responsibility, and continuous learning.
As Skelton noted, companies already structured for bounded agency in humans will find the transition to agentic systems significantly more straightforward. The infrastructure for agency isn't a new technology platform but rather the organisational patterns and cultural norms that enable safe, effective delegation of both human and artificial intelligence.
The anniversary of Team Topologies at QCon London serves as a reminder that the most enduring frameworks aren't about technology trends but about human collaboration. As AI becomes increasingly integrated into software development workflows, the principles that made Team Topologies successful—clarity, boundaries, and stewardship—may prove equally essential for the AI era.
