What Developers Do

Use AI tools: 78.5%
Use daily: 47.1%
Say AI saves time: ~90%
Report enhanced productivity: >80%
Completed task faster (controlled): 55%

What Developers Think

Distrust AI accuracy: 46%
"Highly trust" AI: ~3%
Frustrated by "almost right" output: 66%
Report longer debugging: 45%
Trust "a lot" or "great deal" (DORA): 24%

The Question Shapes the Answer

Broadly the same developer population yields wildly different numbers depending on how you ask

These are not directly comparable surveys — different samples, timing, and audiences. But they illustrate why "adoption rate" claims range from 47% to 97%: question framing is everything. The stricter the definition, the smaller the number.
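
How much the definition matters is easy to see with a toy calculation. The sketch below builds a hypothetical pool of respondents and scores it against progressively stricter definitions of "adoption." The counts, field names, and question wordings are invented for illustration (chosen to span the same 47% to 97% range mentioned above), not taken from any survey cited here.

```python
# Illustrative only: hypothetical respondent counts, not data from any survey cited above.
# Shows how the same pool yields very different "adoption rates" as the definition tightens.
from dataclasses import dataclass

@dataclass
class Respondent:
    has_tried_ai: bool   # ever used an AI coding tool
    uses_weekly: bool    # uses one at least weekly
    uses_daily: bool     # relies on one daily in their main workflow

pool = (
    [Respondent(True, True, True)] * 47      # daily users
    + [Respondent(True, True, False)] * 18   # weekly, but not daily
    + [Respondent(True, False, False)] * 32  # tried it at least once
    + [Respondent(False, False, False)] * 3  # never used it
)

def rate(predicate):
    """Percentage of the pool for which the predicate holds."""
    return 100 * sum(predicate(r) for r in pool) / len(pool)

tried = rate(lambda r: r.has_tried_ai)
weekly = rate(lambda r: r.uses_weekly)
daily = rate(lambda r: r.uses_daily)

print(f"'Have you ever used AI tools?'     {tried:.0f}%")   # 97%
print(f"'Do you use them at least weekly?' {weekly:.0f}%")  # 65%
print(f"'Are they part of daily work?'     {daily:.0f}%")   # 47%
```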

Trust Levels

Stack Overflow 2025 — trust in AI output accuracy

Productivity Claims by Source

Self-reported gains vs. one controlled experiment

Only the GitHub experiment (55% faster, n=95) is a controlled study. All others are self-reported survey data.

The One Controlled Experiment

GitHub 2022 — 95 professional developers, randomized controlled trial

With Copilot
Median completion time: 1h 11m
Task completion rate: 78%
Speed improvement: 55%

Without Copilot
Median completion time: 2h 41m
Task completion rate: 70%
Speed improvement: baseline

This is the only published controlled experiment with professional developers. The 55% speed gain is real but narrow: one task type, one tool, one sample. Self-reported surveys (80–90% "productivity improvement") are not comparable — they measure perception, not performance.
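
The headline number is straightforward arithmetic on the two medians above; a quick check (the minutes helper is just for illustration):

```python
# Reproduce the 55% figure from the two median completion times in the table above.
def minutes(hours, mins):
    return 60 * hours + mins

with_copilot = minutes(1, 11)     # 71 minutes
without_copilot = minutes(2, 41)  # 161 minutes

time_reduction = 1 - with_copilot / without_copilot
print(f"Time reduction:  {time_reduction:.1%}")                   # 55.9%, reported as "55% faster"
print(f"Speed-up factor: {without_copilot / with_copilot:.2f}x")  # 2.27x
```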

Top Frustrations

Stack Overflow 2025 — what goes wrong with AI output

The Stability Problem

DORA 2025 — AI's contradictory impact on software delivery

AI increases throughput and product performance but has a negative relationship with delivery stability. More code, shipped faster, with more failures — the verification debt problem in one chart.

Verification Debt

AI doesn't eliminate work — it shifts it. Developers spend less time writing code and more time reviewing, debugging, and verifying AI output. When "almost right" is the norm (66% of developers report this), the bottleneck moves from generation to verification. DORA's finding that AI hurts delivery stability while boosting throughput is the organizational version of this: more code ships faster, but without proportional safety nets, failure rates climb.

SO 2025: 66% say "almost right but not quite" output is their top frustration
SO 2025: 45% report longer debugging time when working with AI
DORA 2025: AI adoption has a negative relationship with delivery stability
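
One way to make the shift concrete is a toy model that splits a task into generation time plus verification time. Every parameter below is an invented assumption for illustration, not a measurement from any of the surveys cited here; the point is only that when verification grows faster than generation shrinks, the assisted path can end up slower.

```python
# Toy model of verification debt: AI assistance cuts generation time, but "almost right"
# output adds review and debugging time. Every number here is an invented assumption.

def total_task_minutes(generation, verification):
    return generation + verification

# Hypothetical baseline: the developer writes the code unaided.
manual = total_task_minutes(generation=60, verification=15)

# Hypothetical assisted cases: the code appears quickly, and the outcome hinges
# on how much checking and fixing the suggestion needs afterwards.
assisted_clean = total_task_minutes(generation=20, verification=25)  # suggestion mostly correct
assisted_messy = total_task_minutes(generation=20, verification=70)  # suggestion subtly wrong

print(f"manual:          {manual} min")          # 75 min
print(f"assisted, clean: {assisted_clean} min")  # 45 min: the perceived win
print(f"assisted, messy: {assisted_messy} min")  # 90 min: slower than working unaided
```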