Tracking who uses AI (and who doesn't). What leaderboards and usage scores cost you.
By May 2026, most large companies were tracking AI usage at the individual employee level. Some built internal leaderboards ranking staff by how often they used the tools. The logic seemed obvious: measure adoption, reward the leaders, nudge the laggards. The results were not obvious.
The mechanic is simple. Your AI vendor can tell you exactly who prompted what, when, and how many tokens they burned. Microsoft, Google, Salesforce, and OpenAI all offer dashboards that surface usage by person, team, or function. Some platforms auto-generate weekly scorecards, and a few companies went further, publishing internal leaderboards that display top users and flag outliers.
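To see how little machinery that takes, here is a minimal sketch in Python. The export file and its columns (usage_export.csv with user, timestamp, and prompt_tokens) are hypothetical stand-ins; real vendor exports differ, but the aggregation is about this thin.

```python
# Minimal sketch of a usage "scorecard" built from a vendor export.
# The file name and columns (user, timestamp, prompt_tokens) are
# hypothetical -- real dashboards vary -- but the logic is this thin.
import pandas as pd

events = pd.read_csv("usage_export.csv", parse_dates=["timestamp"])

weekly = (
    events.assign(week=events["timestamp"].dt.to_period("W"))
    .groupby(["week", "user"])
    .agg(prompts=("prompt_tokens", "size"), tokens=("prompt_tokens", "sum"))
    .reset_index()
)

# The "leaderboard": rank people by raw activity. Note what's missing --
# nothing here connects usage to the quality or outcome of anyone's work.
latest = weekly[weekly["week"] == weekly["week"].max()]
print(latest.sort_values("tokens", ascending=False).head(10))
```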
The stated goal was accountability. If you paid for the seats, you wanted to know they were being used. If one team saved six hours a week and another saved none, you wanted to understand why.
Most Fortune 500 companies are now tracking workplace AI usage at the group, role, or individual level, according to reporting in early May. A January survey of more than 1,000 small-business employees found that managers save more than twice as much time with AI tools as individual contributors (7.2 hours versus 3.4 per week). The disparity raised questions about training, access, and whether the tools were designed for the wrong work.
But the bigger problem wasn't unequal adoption. It was what happened when you made the metrics visible.
What the surveys actually show
Nearly half of small business workers (45 percent) worry that adopting too much AI could harm their company's reputation, and another 39 percent question whether their business even needs as much AI as industry trends suggest. Perhaps most telling: 30 percent admit they act more optimistic about AI than they feel, showing a disconnect between public enthusiasm and private uncertainty.
That gap between performance and sentiment shows up in executive surveys too. Around 90 percent of the nearly 6,000 CEOs and top executives interviewed for a February study said AI has had no impact on productivity or employment at their business. Around 70 percent of those firms reported actively using AI, meaning the vast majority admit that adopting the tech hasn't moved the needle for them yet.
Cornell research published in 2024 found that organizations using AI to monitor employees' behavior and productivity can expect them to complain more, be less productive, and want to quit more, unless the system is positioned as supporting development rather than policing behavior. In one study, more than 30 percent of participants criticized AI surveillance compared to about 7 percent who were critical of human monitoring.
The leaderboard reflex
Leaderboards work in sales because the goal is unambiguous and the scoreboard is the job. Closed deals, booked revenue, quota attainment. Everyone signed up knowing the number would be public.
Knowledge work is different. A customer success manager who spends an hour on a call preventing churn may use zero AI that day. A finance lead who manually reconciles a suspicious line item instead of trusting the model may produce better work, not worse. A developer who writes code from scratch because the generated snippet introduced a bug three sprints ago is making a correct decision.
When you rank employees by AI usage without accounting for context, you reward activity instead of outcome. People start using the tool to be seen using it. Prompt volume becomes performative. The spreadsheet gets exported to ChatGPT and back again, not because it helps but because it registers.
In some workplaces, AI is starting to feel less like a tool and more like a contest to prove productivity, with internal systems ranking employees on leaderboards by how much they use it, one report noted in early May.
What actually predicts better outcomes
The businesses that report meaningful results from AI share three patterns. None involve leaderboards.
First, they picked a specific workflow and optimized it with a small team before rolling it out. Senior leadership picks the spots for focused AI investments, looking for a few key workflows or business processes where payoffs from AI can be big, according to PwC's 2026 predictions. Top-down focus beats bottom-up crowdsourcing.
Second, they tracked outcomes, not activity. Time saved is a fine metric if someone validates that the time was worth saving. Better metrics: errors prevented, customers retained, margin improved, hiring speed without quality loss.
Third, they let humans decide when not to use it. Research from BetterUp Labs and Stanford found that 41 percent of workers have encountered AI-generated output requiring nearly two hours of rework per instance, creating downstream productivity, trust, and collaboration issues. The employees who caught those errors early often did so by ignoring the AI.
A smaller, better score
If you want to measure something, measure whether the thing you bought AI to fix actually got fixed. Track the metric you cared about before the tool arrived. Median time to close the books. First-response time on support tickets. Customer satisfaction after onboarding. Rework rate on design projects.
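A minimal sketch of that kind of measurement, assuming a ticket export with opened and first_response timestamps; the file name, columns, and rollout date are all placeholders:

```python
# Outcome-first measurement: compare the metric you already cared
# about before and after the rollout. The file, columns, and date
# are hypothetical placeholders for illustration.
import pandas as pd

tickets = pd.read_csv(
    "support_tickets.csv", parse_dates=["opened", "first_response"]
)
rollout = pd.Timestamp("2026-01-15")  # assumed tool rollout date

tickets["response_hours"] = (
    tickets["first_response"] - tickets["opened"]
).dt.total_seconds() / 3600

before = tickets.loc[tickets["opened"] < rollout, "response_hours"].median()
after = tickets.loc[tickets["opened"] >= rollout, "response_hours"].median()
print(f"Median first-response time: {before:.1f}h before, {after:.1f}h after")
```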
Then ask the team quarterly: is the tool helping, neutral, or getting in the way? If adoption is low, find out why before assuming the staff needs nudging. Sometimes the workflow hasn't been redesigned. Sometimes the prompt library is missing. Sometimes the AI just isn't good at that job yet.
You can still pull usage reports. They tell you whether anyone is trying the tool, which is useful in month one. But once adoption passes 50 percent, the question stops being "who's using it" and starts being "is it working." The answer to that question never appears on a leaderboard.
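If you automate anything from those usage reports, make it the adoption number alone. A sketch under the same assumptions as above, plus a hypothetical roster.csv with one row per employee:

```python
# Adoption check: useful in month one, mostly noise after that.
# roster.csv (one row per employee) and the 50% threshold are assumptions.
import pandas as pd

roster = pd.read_csv("roster.csv")
events = pd.read_csv("usage_export.csv")

adoption = events["user"].nunique() / len(roster)
print(f"Adoption: {adoption:.0%}")

# Past the threshold, usage reports answer a question you've stopped
# asking; switch to the outcome comparison above.
if adoption >= 0.5:
    print("Ask 'is it working', not 'who's using it'.")
```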
Related: When your team won't touch the AI tools you bought • AI for small business: a realistic 90-day plan