Everybody uses it.
Almost nobody trusts it.
This is the central tension of AI-assisted coding: developers report strong productivity gains while distrusting the very tools they rely on daily. The data tells two contradictory stories.
The Question Shapes the Answer
Adoption numbers swing wildly depending on how the question is asked
These are not directly comparable surveys — different samples, timing, and audiences. But they illustrate why "adoption rate" claims range from 47% to 97%: question framing is everything. The stricter the definition, the smaller the number.
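A minimal sketch of the framing effect, with invented respondents and invented usage splits: the same answer pool produces three different "adoption rates" depending on whether the question asks about ever trying a tool, weekly use, or daily use.

```python
# Illustrative only: hypothetical respondents, made-up numbers.
# The point is the nesting: everyone who uses AI daily also uses it
# weekly, and everyone who uses it weekly has tried it, so looser
# definitions always yield bigger headline numbers.

respondents = [
    {"tried": True,  "weekly": True,  "daily": True},
    {"tried": True,  "weekly": True,  "daily": False},
    {"tried": True,  "weekly": False, "daily": False},
    {"tried": False, "weekly": False, "daily": False},
]

def rate(definition: str) -> float:
    """Share of respondents counted as 'adopters' under a given definition."""
    return sum(r[definition] for r in respondents) / len(respondents)

for definition in ("tried", "weekly", "daily"):
    print(f"{definition:>6}: {rate(definition):.0%}")
# tried: 75%, weekly: 50%, daily: 25%. Same people, three "adoption rates".
```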
Trust Levels
Stack Overflow 2025 — trust in AI output accuracy
Productivity Claims by Source
Self-reported gains vs. one controlled experiment
Only the GitHub experiment (55% faster, n=95) is a controlled study. All others are self-reported survey data.
The One Controlled Experiment
GitHub 2022 — 95 professional developers, randomized controlled trial
This is the only published controlled experiment with professional developers. The 55% speed gain is real but narrow: one task type (a small HTTP server in JavaScript), one tool, one sample. Self-reported surveys (80–90% "productivity improvement") are not comparable: they measure perception, not performance.
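To pin down the arithmetic behind "55% faster", a short sketch using the approximate group means reported for the experiment (roughly 71 minutes with the tool versus 161 minutes without; treat the exact figures as approximate). The headline number is a reduction in mean task time; restated as throughput, the same data would produce a much larger figure, which is its own framing lesson.

```python
# "55% faster" = reduction in mean completion time.
# Group means are approximate figures from the GitHub/Copilot experiment.

with_tool = 71.0      # mean completion time, treatment group (minutes)
without_tool = 161.0  # mean completion time, control group (minutes)

time_reduction = (without_tool - with_tool) / without_tool
throughput_gain = without_tool / with_tool - 1

print(f"time reduction:  {time_reduction:.0%}")   # ~56% less time per task
print(f"throughput gain: {throughput_gain:.0%}")  # ~127% more tasks per hour
# The same experiment could honestly be headlined "55% faster" or
# "more than doubles output". Framing shapes this answer, too.
```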
Top Frustrations
Stack Overflow 2025 — what goes wrong with AI output
The Stability Problem
DORA 2025 — AI's contradictory impact on software delivery
AI increases throughput and product performance but has a negative relationship with delivery stability. More code, shipped faster, with more failures — the verification debt problem in one chart.
Verification Debt
AI doesn't eliminate work — it shifts it. Developers spend less time writing code and more time reviewing, debugging, and verifying AI output. When "almost right" is the norm (66% of developers report this), the bottleneck moves from generation to verification. DORA's finding that AI hurts delivery stability while boosting throughput is the organizational version of this: more code ships faster, but without proportional safety nets, failure rates climb.
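A toy model of that shift, with all numbers invented for illustration: cut the writing step sharply, grow the verification step, and the end-to-end gain is far smaller than the generation speedup suggests.

```python
# Toy model of verification debt (every number here is invented).
# AI shrinks the writing step but inflates review/verification, so
# end-to-end cycle time improves much less than raw generation speed.

write_before, verify_before = 60, 20   # minutes per change, pre-AI
write_after,  verify_after  = 20, 45   # fast drafts; "almost right" output
                                       # shifts effort into review and debugging

before = write_before + verify_before  # 80 minutes end to end
after = write_after + verify_after     # 65 minutes end to end

print(f"generation speedup: {write_before / write_after:.1f}x")  # 3.0x
print(f"end-to-end change:  {(before - after) / before:.0%}")    # ~19% faster
# A 3x generation speedup becomes a modest end-to-end gain once the
# verification bill is paid. Skip the verification instead, and the
# cost resurfaces as delivery instability, which is DORA's finding.
```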