A bizarre new trend called 'tokenmaxxing' has emerged: developers at major tech companies are deliberately wasting AI tokens to inflate their usage metrics. Meanwhile, subsidies for AI coding agents face an uncertain future.
This week in tech, we're witnessing what might be the shortest-lived trend in recent memory: 'tokenmaxxing.' At companies like Meta, Microsoft, and Salesforce, developers have discovered a peculiar way to game their AI usage metrics—by deliberately burning through tokens and wasting money to inflate their AI consumption numbers.
The Tokenmaxxing Phenomenon
The trend works like this: developers write intentionally inefficient code that generates excessive token usage, then proudly report these inflated numbers to management. Why? Because many organizations have set AI usage as a key performance metric, creating perverse incentives where waste equals success.
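To make the mechanism concrete, here is a minimal, entirely hypothetical sketch of what tokenmaxxing amounts to: padding a prompt with filler so a metered AI API bills far more tokens than the task requires. The `count_tokens` function is a crude words-based stand-in for a real tokenizer, and the filler text and token target are invented for illustration.

```python
def count_tokens(text: str) -> int:
    """Crude stand-in for a real tokenizer: roughly one token per word."""
    return len(text.split())

def pad_prompt(prompt: str, target_tokens: int) -> str:
    """Inflate a prompt with redundant filler until it hits target_tokens."""
    filler = "For additional context, please consider all prior details."
    padded = prompt
    while count_tokens(padded) < target_tokens:
        padded += " " + filler
    return padded

lean = "Summarize this function."
bloated = pad_prompt(lean, target_tokens=500)

print(count_tokens(lean))     # a handful of tokens for the real task
print(count_tokens(bloated))  # hundreds of billable tokens for the same task
```

The task is identical in both cases; only the billable token count changes, which is precisely what a usage-based metric rewards.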
One engineer described it as "the corporate equivalent of leaving all the lights on to show we're using electricity." Another compared it to "ordering the most expensive items on the menu just to prove you can afford to eat out."
The Economics of Waste
This behavior isn't just environmentally questionable—it's financially absurd. At current enterprise AI rates, tokenmaxxing can cost companies thousands of dollars per developer per month. Yet the metric-driven culture that spawned this trend shows no signs of abating.
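A back-of-envelope calculation shows how quickly this adds up. The per-million-token rate and daily burn below are assumed figures for illustration, not quoted enterprise prices.

```python
def monthly_cost(tokens_per_day: int, workdays: int = 22,
                 price_per_million: float = 15.0) -> float:
    """Monthly spend for one developer at a flat per-million-token rate.

    price_per_million is a hypothetical enterprise rate in USD.
    """
    monthly_tokens = tokens_per_day * workdays
    return monthly_tokens / 1_000_000 * price_per_million

# A developer deliberately burning 10M tokens a day:
print(monthly_cost(10_000_000))  # 3300.0
```

At these assumed rates, a single tokenmaxxing developer costs over three thousand dollars a month, consistent with the "thousands of dollars per developer per month" figure above.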
The Subsidy Bubble Bursts
Meanwhile, the golden age of subsidized AI coding agents appears to be ending. Anthropic recently discontinued its enterprise plan subsidies, and Uber burned through its entire 2026 AI token budget in just three months. The message is clear: companies can no longer afford to treat AI tokens as an infinite resource.
Industry-Wide Reckoning
Several other developments signal a broader shift in how companies approach AI:
Cal.com's Controversial Pivot: The open-source Calendly alternative moved significant portions of its codebase to closed repositories, citing AI and security concerns. However, many observers see this as a convenient excuse for a business model change that was likely inevitable.
Vercel's Open Source Move: In contrast to Cal.com, Vercel open-sourced its "agent factories" tool, betting on community-driven development for AI infrastructure.
Linux Kernel's Pragmatic Approach: The Linux kernel maintainers published sensible AI usage guidelines, acknowledging the technology's potential while maintaining their characteristic pragmatism about new tools.
The Future of AI Metrics
The tokenmaxxing trend highlights a fundamental problem in how companies measure AI success. When usage becomes the goal rather than a means to an end, you get exactly what you incentivize: waste.
As more companies implement per-engineer AI budgets and move away from pure usage metrics, we may see the end of tokenmaxxing. But the underlying issue—the challenge of measuring knowledge worker productivity in the age of AI—remains unsolved.
The irony is palpable: in the rush to adopt AI, some companies have created metrics that reward the exact opposite of what AI should accomplish—efficiency and productivity. Tokenmaxxing may be short-lived, but the lessons it teaches about metric design and incentive structures will likely endure far longer than the trend itself.

The Pulse will continue monitoring these developments as the industry grapples with the practical realities of AI adoption beyond the hype cycle.
