As companies scramble to adopt AI, many are falling into the trap of measuring token usage rather than actual value. One engineering leader shares their team's pragmatic approach to AI adoption that prioritizes understanding, maintainability, and human needs over vanity metrics.
In the rush to demonstrate AI adoption, some companies have embraced 'tokenmaxxing' - the practice of tracking and rewarding engineers based on the number of AI tokens they consume. This approach is the latest in a long line of management metrics whose designers ignore a fundamental truth: any metric that can be gamed will be gamed.
The author begins with a telling anecdote from their early career at a law firm where a new executive attempted to improve productivity by timing paralegals with a stopwatch. As you might expect, this approach failed spectacularly, as employees naturally altered their behavior when being observed. Today's tokenmaxxing initiatives suffer from the same fundamental misunderstanding of how metrics drive behavior rather than actual productivity.
What Tokenmaxxing Actually Achieves
When companies implement tokenmaxxing programs, they typically create leaderboards showing which engineers use the most AI tokens. The predictable result is that engineers quickly find ways to game the system:
- Creating loops that waste tokens to climb the leaderboard (see the sketch after this list)
- Using just enough tokens to appear engaged, regardless of whether that usage produces anything useful
- Prioritizing token count over actual utility
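To make concrete how little signal a token leaderboard carries, here is a minimal, hypothetical sketch of the first behavior above: a loop that consumes tokens without producing anything anyone will read. This is not from the article; `fake_llm_call` is a placeholder standing in for whatever per-token-billed client an engineer might point such a loop at.

```python
# Hypothetical sketch: gaming a token leaderboard.
# fake_llm_call is a stand-in for a real, per-token-billed LLM client.

def fake_llm_call(prompt: str) -> int:
    """Pretend to call a model; return a rough token count (~1 token per word, doubled to assume an equal-size reply)."""
    return len(prompt.split()) * 2

# Filler prompt whose only purpose is to be long.
FILLER = "restate the previous answer in different words " * 200

def climb_leaderboard(rounds: int = 1_000) -> int:
    """Burn tokens in a loop that produces nothing of value."""
    total = 0
    for _ in range(rounds):
        total += fake_llm_call(FILLER)
    return total

if __name__ == "__main__":
    # A few minutes of this and the 'top AI adopter' slot is yours.
    print(f"tokens 'consumed': {climb_leaderboard():,}")
```

The point of the sketch is simply that the metric rewards volume, and volume is trivial to manufacture.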
This behavior mirrors historical examples of metrics being gamed, such as:
- Developers writing excessive amounts of trivial code to hit line count metrics
- Customer service representatives handling many brief calls rather than resolving complex issues
- Students optimizing for test scores rather than actual learning
The fundamental issue remains the same: when metrics become the goal rather than a measure of progress toward actual objectives, they lose their meaning and often become counterproductive.
A Better Approach: One Team's AI Policy
Rather than focusing on token consumption, the author's team developed a policy centered on four core principles:
1. No AI Mandate
The policy explicitly states there is no requirement to use AI tools. Engineers won't be evaluated based on their AI usage. This approach recognizes that:
- AI tools are still evolving rapidly, with significant differences in quality between tools released just months apart
- Different engineers have different workflows and preferences
- Forcing tool adoption often backfires, creating resentment without improving outcomes
The author notes the contradiction in AI boosterism: simultaneously claiming that engineers must adopt AI immediately or be left behind, while also claiming that everything known today will be obsolete in six months. If knowledge becomes obsolete so quickly, why the rush to adopt current tools rather than waiting for more mature solutions?
2. Understanding AI-Generated Code
Any code produced with AI assistance remains the engineer's responsibility. This means:
- Engineers must understand what AI-generated code does
- The code must conform to existing patterns and standards
- Engineers can't shift the burden of understanding to code reviewers
This principle acknowledges that AI tools can produce code that works but may be difficult to understand, modify, or debug later. In a mature codebase with accumulated technical debt, this understanding becomes even more critical.
The author specifically rejects the AI maximalist approach of accepting whatever code an AI generates, arguing that in established codebases, human maintainability should trump machine convenience.
3. Operability Without AI
Engineers must be able to perform their jobs if their AI tools disappear. This requirement:
- Ensures engineers develop fundamental skills rather than becoming dependent on AI crutches
- Protects the organization against potential service disruptions or changes in AI offerings
- Maintains the ability to understand and maintain the codebase
This principle is particularly important given the current state of AI tools, which are still evolving rapidly with significant changes in capabilities, interfaces, and reliability.
4. Focus on People
Ultimately, the policy emphasizes that AI should serve people, not the reverse. This means:
- Prioritizing customer needs and team well-being over AI adoption for its own sake
- Recognizing the tension between delivering features quickly and maintaining sustainable development practices
- Avoiding the creation of technical debt that harms long-term productivity
The author notes that while AI can help deliver features faster, this shouldn't come at the cost of creating an unmaintainable codebase that frustrates the development team.
Special Considerations for Junior Engineers
The policy includes specific guidance for junior engineers, who face unique challenges in an AI-augmented development environment:
- Learning happens through struggle and experience, not through outsourcing coding tasks to AI
- Junior engineers need opportunities to develop fundamental skills through practice
- Over-reliance on AI tools can hinder long-term career development
The author argues that the current generation of AI tools is pulling up the ladder on junior engineers by automating away the kind of repetitive tasks that have traditionally helped newcomers learn codebases and development practices.
Context Matters: Why This Policy Works for This Team
The author acknowledges that their policy may not be suitable for all teams. Their specific context includes:
- An established ten-year-old codebase with accumulated technical debt
- A regulatory environment that requires stability and predictability
- Long-term customers who value reliability over rapid feature addition
In contrast, a greenfield startup with a different team composition and priorities might adopt a more AI-maximalist approach. The key is having a coherent philosophy that aligns with the team's actual goals and constraints.
The Alternative to Tokenmaxxing
Tokenmaxxing represents a failure of leadership - an attempt to appear innovative while implementing the same tired metric-driven approaches that have failed for decades. The alternative requires:
- Understanding what actually drives value in your specific context
- Developing metrics that align with those value drivers
- Trusting engineers to make appropriate tool choices
- Focusing on outcomes rather than outputs
As the author notes, "I care about people, not tokens." This simple statement captures the essence of a human-centered approach to AI adoption in software development.
For teams looking to develop their own AI policies, the key questions to consider include:
- What problem are we trying to solve with AI?
- How will we measure success beyond token consumption?
- What safeguards do we need to maintain code quality and engineer skills?
- How does AI adoption affect our long-term maintainability and team sustainability?
The future of AI in software development will be determined not by which teams use the most tokens, but by which teams develop thoughtful approaches that balance innovation with practicality and human needs with technological capabilities.