A new industry trend dubbed "tokenmaxxing," where developers compete to use the largest AI processing budgets, is leading to significant inefficiencies and hidden costs in software engineering. Data from multiple developer analytics firms shows that while AI coding tools like Claude Code and Cursor dramatically increase the volume of code written, they also cause a sharp rise in code that must later be revised or deleted.

This phenomenon is creating a false sense of productivity, as engineering managers see high initial code acceptance rates but miss the subsequent churn. The findings challenge the Silicon Valley culture of treating large token budgets—the amount of AI processing power a developer can consume—as a badge of honour.

Metrics Reveal Hidden Inefficiency

Analysis from firms including Waydev, GitClear, Faros AI, and Jellyfish provides a consistent, data-driven picture. Alex Circei, CEO of Waydev, which works with over 10,000 engineers, reports that while AI tools achieve code acceptance rates of 80-90%, subsequent revisions drive the real-world, lasting acceptance rate down to between 10% and 30%.

"Engineering managers are seeing code acceptance rates of 80% to 90%—meaning the share of AI-generated code that developers approve and keep—but they’re missing the churn," Circei told TechCrunch. His firm recently overhauled its platform to track the metadata from AI agents, offering new insights into code quality and cost.
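The gap Circei describes between initial and lasting acceptance can be sketched as a simple ratio. The function and figures below are illustrative assumptions, not Waydev's actual methodology: a "lasting" rate discounts initially accepted AI-generated lines by how many survive later revision.

```python
# Illustrative sketch, not Waydev's actual methodology: the initial
# acceptance rate counts lines approved up front, while the lasting
# rate counts only lines still present after subsequent revisions.

def lasting_acceptance(generated: int, accepted: int, surviving: int) -> tuple[float, float]:
    """Return (initial acceptance rate, lasting acceptance rate)."""
    initial = accepted / generated
    lasting = surviving / generated
    return initial, lasting

# Hypothetical numbers in line with the figures Circei cites:
# 1,000 AI-generated lines, 850 accepted up front, only 250 still
# present unchanged after later revisions.
initial, lasting = lasting_acceptance(1000, 850, 250)
print(f"{initial:.0%} initial, {lasting:.0%} lasting")  # 85% initial, 25% lasting
```

On these assumed numbers, a headline 85% acceptance rate collapses to a 25% lasting rate, which is squarely inside the 10-30% range Circei reports.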

Industry-Wide Data Points to a Problem

The scale of the issue is highlighted in several recent industry reports. A January study by GitClear found that regular AI users averaged 9.4 times the code churn of their non-AI counterparts, a rate more than double any measured productivity gains.

Faros AI's March 2026 report, drawing on two years of data, found that code churn—measured by lines deleted versus lines added—had increased by 861% under high AI adoption.
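Faros AI does not publish its exact formula, but a churn metric of the kind it describes can be sketched as the ratio of lines deleted to lines added over a set of commits. The function and data below are illustrative assumptions only:

```python
# Illustrative sketch only: Faros AI's exact methodology is not public,
# so this uses one common definition of churn, the ratio of total lines
# deleted to total lines added across a set of commits.

def churn_ratio(commits: list[dict]) -> float:
    """Compute churn as total lines deleted divided by total lines added.

    Each commit is a dict with 'added' and 'deleted' line counts,
    e.g. as reported by `git log --numstat`.
    """
    added = sum(c["added"] for c in commits)
    deleted = sum(c["deleted"] for c in commits)
    if added == 0:
        return 0.0  # avoid division by zero on delete-only histories
    return deleted / added

# Hypothetical data: a week of commits from one repository.
commits = [
    {"added": 400, "deleted": 40},
    {"added": 250, "deleted": 180},
    {"added": 120, "deleted": 150},
]
print(round(churn_ratio(commits), 2))  # 0.48
```

An 861% increase in a ratio like this would mean deletions rising nearly ninefold relative to additions, which is why high initial output can mask so much discarded work.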

Perhaps most starkly, data from Jellyfish on 7,548 engineers in Q1 2026 showed that engineers with the largest token budgets produced the most pull requests, but at a poor return. They achieved twice the throughput at ten times the token cost, roughly five times the token spend per pull request, indicating the tools generate volume, not necessarily value.

Seniority Gap and Lasting Change

A common thread in the data is a disparity between junior and senior engineers. Junior developers accept far more AI-generated code up front, which leads to more rewriting later and accrues technical debt.

Despite the growing awareness of these inefficiencies, a retreat from AI tools is not anticipated. The shift is seen as fundamental. "This is a new era of software development, and you have to adapt, and you are forced to adapt as a company," Circei stated. "It’s not like it will be a cycle that will pass."

The market is responding to the need for better oversight. Last year, Atlassian acquired engineering intelligence startup DX for $1 billion to help its customers understand the return on investment from AI coding agents, signalling major corporate recognition of the productivity measurement challenge.