Always 20 years away.
Until it wasn't.
For decades, AI researchers predicted AGI was perpetually on the horizon. Then the timelines collapsed. Here's how the goalposts moved — and what the current forecasts say.
The Perpetual Horizon
Major AGI predictions over the decades — when researchers said it would arrive, and when they said it.

Herbert Simon (1957): "Within ten years a digital computer will be the world's chess champion," and machines would be capable of "any work a man can do."
Target: ~1967

Herbert Simon (1965): "Machines will be capable, within twenty years, of doing any work a man can do."
Target: ~1985

Marvin Minsky (1967): "Within a generation... the problem of creating 'artificial intelligence' will substantially be solved."
Target: ~1990

Marvin Minsky (1970): "In from three to eight years we will have a machine with the general intelligence of an average human being."
Target: ~1978

Hans Moravec (1988): Predicted human-level AI by 2010, based on computational trends, in Mind Children.
Target: ~2010

Vernor Vinge (1993): "Within thirty years, we will have the technological means to create superhuman intelligence."
Target: ~2023

Ray Kurzweil (2005): The Singularity Is Near: human-level AI by 2029, technological singularity by 2045.
Target: 2029

Grace et al. survey (2016): Median estimate for HLMI (50% probability): 2061. Asian researchers predicted ~2046; North American researchers ~2064.
Median: 2061

Grace et al. survey (2022): Median shifted to 2059. But question framing matters: "HLMI" yielded 2059, while "Full Automation of Labor" (FAOL) yielded 2164.
Median: 2059

Grace et al. survey (2023): Massive shift: the median HLMI prediction was pulled forward to 2049. 10% of researchers said by 2027; 25% said by 2030. The FAOL median was pulled from 2164 to 2116. The sharpest revision in the survey's history.
Median: 2049

Sam Altman: "AGI is going to get developed basically regardless... I believe the best thing we can do is try to steer it well." Gave no firm date but implied near-term.

Demis Hassabis (2024): "AGI could arrive within the next decade." His Nobel Prize acceptance speech cited rapid progress in protein folding and reasoning.
Target: by ~2034

Dario Amodei (2024): "Powerful AI" could arrive by 2026. Published the essay Machines of Loving Grace on potential positive impacts, describing AI that could compress a century of biological research into 5-10 years.
Target: 2026–2027

Jensen Huang: "If I gave an AI... every single test you can imagine, you make a list of tests and put it in front of the computer science industry, and I believe in five years' time, we'll do well on every single one."
Target: ~2029

Ray Kurzweil (2024): Doubled down on his 2029 prediction from 2005: "We're basically on track." Published The Singularity Is Nearer (2024) defending the timeline.
Target: 2029 (unchanged)

Sam Altman (January 2025): "We are now confident we know how to build AGI as we have traditionally understood it."
Target: imminent

Elon Musk: "AI will probably be smarter than any single human around the end of next year" (2026). Later said "smarter than all humans combined" by 2028-2029.
Target: 2026–2029

Geoffrey Hinton: Initially thought AGI was decades away. After leaving Google (2023), revised to "5-20 years." By 2024: "could happen within 5 years." Cited the speed of progress as the reason for his shift.
Target: 2028–2034

Mustafa Suleyman: Coined "ACI" (Artificial Capable Intelligence) as a more practical goalpost than AGI. "We'll have ACI within 2-3 years." Author of The Coming Wave (2023).
Target: 2026–2027

Shane Legg: Predicted a 50% probability of human-level AI by 2028 — back in 2011, when most peers said 2050+. One of the most accurate early predictions, made 17 years ahead of its target date.
Target: 2028 (from 2011)

Recent estimates: Multiple smaller surveys and prediction markets suggest the researcher median has continued compressing toward ~2035-2040. No full Grace et al. update has been confirmed for 2025, but the trend is unmistakable.
Estimated median: ~2035-2040

The Collapsing Timeline
Median predicted year for Human-Level Machine Intelligence (HLMI) — AI researcher surveys
Each data point represents when AI researchers, surveyed that year, said there was a 50% chance of achieving HLMI. The gap between "when asked" and "predicted year" shrank from 45 years to under 15.
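That shrinking horizon can be computed directly from the survey medians quoted above. A minimal Python sketch; the 2025 entry is an assumption (the midpoint of this article's estimated ~2035-2040 range), not a published survey result:

```python
# Survey year -> median predicted HLMI year, from the surveys cited above.
survey_medians = {
    2016: 2061,  # Grace et al. first survey
    2022: 2059,
    2023: 2049,
    2025: 2037,  # assumed: midpoint of the article's ~2035-2040 estimate
}

def horizon(surveyed: int, predicted: int) -> int:
    """Years between when researchers were asked and when they said HLMI would arrive."""
    return predicted - surveyed

for year, median_year in sorted(survey_medians.items()):
    print(f"{year}: HLMI median {median_year}, horizon {horizon(year, median_year)} years")
```

Under those inputs the horizon falls from 45 years (2016) to about 12 (2025), matching the "45 years to under 15" compression described above.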
The Skeptics
Not everyone agrees the timelines have collapsed. Several prominent researchers caution against extrapolating recent progress:
Yann LeCun: "We are not on the path to AGI." Has consistently argued that LLMs lack world models and cannot achieve true understanding. Predicts AGI requires fundamental new architectures not yet invented.
Gary Marcus: "Deep learning is hitting a wall." Argues LLMs are sophisticated pattern matchers, not reasoning systems. Points to persistent failures in compositional reasoning and reliability.
Rodney Brooks: Predicts AGI no earlier than 2300. Maintains a public "predictions scorecard" tracking AI hype vs. reality. Has been consistently skeptical of AGI timelines since the 1990s.
François Chollet: Created the ARC benchmark specifically to test genuine reasoning vs. pattern matching. Argues current AI is "skill," not "intelligence" — good at specific tasks, unable to generalize.
The core skeptic argument: benchmarks are being saturated without the underlying problem being solved. Models get better at tests designed to measure intelligence without becoming intelligent. The "moving goalpost" works both ways — what counts as AGI keeps being redefined downward to match what current systems can do.
When Industry Leaders Say AGI Will Arrive
Predictions from CEOs and prominent researchers, as of their most recent public statements
Range bars show the span of each prediction. Note that definitions of "AGI" vary significantly between predictors — some mean superhuman performance on all cognitive tasks, others mean economically valuable autonomous work.
The Definition Problem
Every prediction above carries an asterisk: nobody agrees on what "AGI" means.
| Definition | Who Uses It | Implied Timeline |
|---|---|---|
| "AI that can do any intellectual task a human can" | Classical definition (Minsky, McCarthy) | Decades away or never |
| "AI that can do economically valuable work autonomously" | OpenAI's internal definition | 2025–2028 |
| "AI that passes every benchmark we can think of" | Jensen Huang's framing | ~2029 |
| "AI that can fully automate all labor" | Grace et al. FAOL metric | 2060+ |
| "AI that can do novel science at PhD level" | Amodei's practical test | 2026–2027 |
The timelines haven't just compressed — the definition has softened. When someone in 1967 said "AGI in 20 years," they meant something more ambitious than what many leaders today mean by "AGI in 3 years." The goalposts didn't just move closer; they also got wider.
Prediction Markets: The Wisdom of Crowds
Metaculus community median for "When will AGI be achieved?" — sampled over time
Metaculus aggregates forecasts from thousands of predictors. The dramatic 2023-2024 compression mirrors the shift in expert surveys — ChatGPT and GPT-4 moved the Overton window on what seemed possible.
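Metaculus's actual aggregation is more sophisticated (recency-weighted medians plus a separate ensemble forecast), but the core "wisdom of crowds" mechanic is just the median of many independent forecasts. A toy sketch with invented forecast values, not real Metaculus data:

```python
from statistics import median

# Hypothetical individual forecasts for the year AGI arrives --
# illustrative numbers only, not real Metaculus submissions.
forecasts_2022 = [2045, 2048, 2050, 2055, 2060, 2070, 2100]
forecasts_2024 = [2027, 2029, 2030, 2032, 2033, 2035, 2040]

# The median barely moves when a single outlier (e.g. 2100) is present,
# which is why it is a robust way to aggregate noisy crowd forecasts.
print("2022 community median:", median(forecasts_2022))
print("2024 community median:", median(forecasts_2024))
```

With these made-up inputs the community median drops from mid-century to the early 2030s, the same shape of compression the real Metaculus question showed after ChatGPT and GPT-4.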