For the first time in the history of software, the act of writing code is fully telemetered. Every prompt sent to Claude Code or Cursor. Every suggestion accepted. Every suggestion rejected and rewritten. Every iteration. The process — not just the output — is now a stream of structured events.
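To make "stream of structured events" concrete, here is a minimal sketch of what one session's telemetry might look like. The schema is invented for illustration; neither Claude Code nor Cursor publishes events in exactly this shape.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative only: field and event names are assumptions, not any
# vendor's actual schema. The point is the shape of the data.
@dataclass
class AssistEvent:
    timestamp: datetime
    engineer_id: str
    event_type: str    # "prompt", "suggestion_accepted", "suggestion_rejected", "rewrite"
    session_id: str    # groups one coding session's events together
    chars_changed: int # size of the edit this event produced

session = [
    AssistEvent(datetime(2026, 1, 5, 9, 14), "eng_42", "prompt", "s1", 0),
    AssistEvent(datetime(2026, 1, 5, 9, 15), "eng_42", "suggestion_accepted", "s1", 180),
    AssistEvent(datetime(2026, 1, 5, 9, 21), "eng_42", "suggestion_rejected", "s1", 0),
    AssistEvent(datetime(2026, 1, 5, 9, 24), "eng_42", "rewrite", "s1", 95),
]
```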
This is genuinely new. We've had commit history forever, but commit history is a compressed, after-the-fact artifact. The actual cognitive work — the rewrites, the dead ends, the prompts that didn't quite land — was always invisible. Now it's a log file.
Every CTO I talk to is starting to ask the obvious questions. Who on the team is using these tools most effectively? Who's getting the best results from their prompts? Who's still writing everything by hand? AI coding tool adoption is already a 2026 corporate initiative for most mid-stage companies. These dashboards are coming whether the engineering org wants them or not.
The danger is treating new data as better data. AI tool telemetry is just another input — and a noisier one than most leaders realize. Prompt count is not productivity. Acceptance rate is not quality. A senior engineer who carefully crafts one good prompt will look less "engaged" than a junior who fires off fifty mediocre ones. I've argued elsewhere that the GTM vs. R&D measurement gap is real, but the answer isn't to flood R&D with vanity metrics dressed up as AI insight.
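A toy calculation makes the distortion obvious. The numbers below are invented, but the ranking flip is the whole point: sort by the vanity metric and the junior wins; sort by outcomes and the senior does.

```python
# Hypothetical week of telemetry for two engineers. All numbers are made up
# to illustrate the distortion, not drawn from any real dataset.
engineers = {
    "senior": {"prompts": 6,  "accepted": 5,  "merged_prs": 4, "reverts": 0},
    "junior": {"prompts": 50, "accepted": 31, "merged_prs": 3, "reverts": 2},
}

for name, week in engineers.items():
    engagement = week["prompts"]                     # the vanity metric
    acceptance = week["accepted"] / week["prompts"]  # looks like "quality"
    print(f"{name}: engagement={engagement}, acceptance={acceptance:.0%}, "
          f"merged_prs={week['merged_prs']}, reverts={week['reverts']}")

# Ranked by "engagement", the junior comes first. Ranked by outcomes
# (merged work, low revert rate), the senior does. Same data, opposite story.
```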
The interesting use of this data isn't ranking. It's studying what top performers actually do — which is a much harder analytical exercise than "sort by prompts per week descending." That requires actual research, not a Looker dashboard.
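A first, crude pass at that research might look like the sketch below, assuming you have collected session-level events like the ones above. The features here (prompt length, iteration count) are guesses; deciding which features actually separate strong sessions from weak ones is the hard part.

```python
import statistics

# Hypothetical per-session features derived from the raw event stream.
# Which features matter is the open research question; these are assumptions.
sessions = [
    {"engineer": "eng_42", "prompt_len": 420, "iterations": 2, "accepted": True},
    {"engineer": "eng_42", "prompt_len": 390, "iterations": 1, "accepted": True},
    {"engineer": "eng_07", "prompt_len": 60,  "iterations": 6, "accepted": False},
    {"engineer": "eng_07", "prompt_len": 75,  "iterations": 5, "accepted": True},
]

def profile(engineer_id: str) -> dict:
    """Summarize how an engineer works, not how much they prompt."""
    mine = [s for s in sessions if s["engineer"] == engineer_id]
    return {
        "median_prompt_len": statistics.median(s["prompt_len"] for s in mine),
        "median_iterations": statistics.median(s["iterations"] for s in mine),
        "acceptance_rate": sum(s["accepted"] for s in mine) / len(mine),
    }

# Compare profiles across engineers you already know are strong, then ask
# *why* the patterns differ. The conversation is the deliverable, not a rank.
print(profile("eng_42"))
print(profile("eng_07"))
```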
My prediction: in 2026, every engineering org of meaningful size will have an AI tool dashboard. In 2027, half of them will quietly walk it back because the metrics it surfaced drove worse behavior, not better. The teams that thrive will be the ones that treat this data as a starting point for conversation, not a leaderboard for performance review. Telemetry is a microscope. Microscopes are useful. They are not, by themselves, a diagnosis.
Related Essays
- Top Performer Analysis: The Real Opportunity in AI Tool Telemetry. The interesting use of AI coding tool data isn't ranking. It's understanding how your best engineers actually work — and helping the rest of the team catch up.
- The GTM vs. R&D Measurement Gap. Sales has revenue. Engineering has hand-waving. The asymmetry in how we measure go-to-market versus R&D is a real problem, not a feature.
- The Gaming Problem Never Goes Away. Any developer performance metric can be gamed. AI tools just give us new things to measure — and new ways to get it wrong.
Key takeaways
- AI tools turn coding into a logged, telemetered process.
- New data does not equal better measurement.
- AI adoption dashboards are a 2026 inevitability — handle with care.
FAQ
What new data do AI coding tools produce?
Every prompt, every acceptance, every rejection, every iteration. The process of writing code becomes telemetered in a way it never was before.
Is AI tool adoption a useful metric?
Adoption is a leading indicator at best. It tells you who's trying. It doesn't tell you who's shipping better software because of it.