Tokenmaxxing is here, and it's giving HR a strategy problem
By Alexei Dunaway | May 6, 2026 | 4 min read

Employers started measuring AI usage. Employees started performing it. That sequence, more than any particular tool or trend, is the clearest sign that AI adoption has moved ahead of AI strategy in most organizations.

By now, you've probably heard of "tokenmaxxing": maximizing visible AI usage to signal productivity and commitment, whether or not it actually drives either. Some companies have gone further, setting AI usage as a criterion for promotions. The result is predictable. Employees compete to showcase engagement with the technology, and the question of whether that engagement translates into actual performance becomes harder to ask, let alone answer.

What is tokenmaxxing, and why is it spreading in the workplace?

Tokenmaxxing is performative AI adoption, optimizing for the appearance of AI use rather than its outcomes. It spreads when organizations measure inputs rather than results, when AI usage becomes a proxy for capability, and when the cultural pressure to be seen as AI-forward outpaces the clarity about what AI-forward actually means.

The dynamic creates a compounding problem. Employees spend time generating AI output, checking it for errors, reworking it when errors are found, and in some cases adding errors back in to make it read as human. Tools like Sinceerly, a Chrome extension that inserts typos into written content to disguise its AI origins, sit at the extreme end of this behavior. But the underlying pattern, in which AI adds workflow steps rather than removing them, is more widespread than the novelty of that example suggests.

How does performative AI erode trust inside organizations?

The trust erosion runs in multiple directions at once. Employees look down on leaders who rely on AI-generated communication. Employers look down on candidates who use AI to draft applications. Both groups are simultaneously using the same tools and concealing that use, which produces an environment where everyone suspects everyone and no one says so directly.

The difficulty is that distinguishing between AI-generated and human-written content is genuinely hard. That ambiguity creates a culture of assumption: people leap to conclusions about whether something was AI-generated and make judgments based on those conclusions. The result is a workplace where AI is both everywhere and nowhere acknowledged, and where the distrust compounds with each interaction.

For managers and leaders specifically, the reputational stakes are real. Leaders who rely heavily on AI-generated communication do measurable damage to workplace trust.

AI strategy should come before AI tools

The deeper issue underneath tokenmaxxing is that most organizations reached for AI tools before they had clarity on what they were trying to achieve. Adoption followed availability, and measurement followed adoption, which meant performance frameworks were retrofitted onto behavior that was already happening.

Starting with strategy means asking what specific outcomes the organization is trying to improve, which roles and workflows have the most to gain from AI support, and what good looks like before deploying a tool to get there. Without those answers, usage metrics fill the vacuum. And usage metrics, as tokenmaxxing illustrates, measure activity rather than impact. For a deeper look at how to build the measurement infrastructure first, our AI ROI measurement framework lays out a practical starting point. And if the question is where AI actually earns its keep, explicit use cases by function offer a more grounded place to anchor that conversation.

HR as a strategic partner in rethinking performance assessment

Performance frameworks built around AI usage as a metric are, at this stage, measuring the wrong thing. Usage is an input. What matters is whether the work is better, faster, or more impactful as a result. HR is the function best positioned to make that distinction, and the organizations where HR operates as a strategic partner rather than a policy enforcer are the ones most likely to get this right.

That means redesigning performance criteria around outcomes rather than tools, building manager capability to distinguish between genuine AI fluency and performative adoption, and creating the psychological safety for employees to use AI honestly without fear that disclosure will be used against them. The alternative, a culture where AI use is simultaneously required and stigmatized, is already showing its costs.

See Pascal in action.

Get a live demo of Pascal, your 24/7 AI coach inside Slack and Teams, helping teams set real goals, reflect on work, and grow more effectively.

Book a demo