Historical Pattern (1956–2020): AGI has recurrently been predicted to be "always 20 years away."
AI Researcher Surveys (2016–2025): the median HLMI prediction shifted ~23 years closer, from 2061 to 2038.
Metaculus Community (2022–2025): the community prediction median collapsed ~13 years, from 2040 to 2027.
Industry Leaders (2025): most CEOs now predict AGI within single-digit years (2–5), not decades.

The Perpetual Horizon

Major AGI predictions over the decades: when each prediction was made, and when it said human-level AI would arrive.

The Optimistic Dawn
1957
Herbert Simon

"Within ten years a digital computer will be the world's chess champion," and within the same span a computer would "discover and prove an important new mathematical theorem."

Target: ~1967
1965
Herbert Simon

"Machines will be capable, within twenty years, of doing any work a man can do."

Target: ~1985
1967
Marvin Minsky

"Within a generation... the problem of creating 'artificial intelligence' will substantially be solved."

Target: ~1990
1970
Marvin Minsky

"In from three to eight years we will have a machine with the general intelligence of an average human being."

Target: ~1978
AI Winter & Caution
1988
Hans Moravec

Predicted human-level AI by 2010 based on computational trends in Mind Children.

Target: ~2010
1993
Vernor Vinge

"Within thirty years, we will have the technological means to create superhuman intelligence."

Target: ~2023
2005
Ray Kurzweil

The Singularity Is Near: Human-level AI by 2029, technological singularity by 2045.

Target: 2029
2011
Shane Legg (DeepMind co-founder)

Predicted a 50% probability of human-level AI by 2028, back in 2011 when most peers said 2050+. An unusually early near-term prediction, made 17 years ahead of its target date.

Target: 2028 (from 2011)
The Deep Learning Explosion
2016
Grace et al. AI Researcher Survey (n=352)

Median estimate for HLMI (50% probability): 2061. Asian researchers predicted ~2046; North American researchers ~2090.

Median: 2061
2022
Grace et al. AI Researcher Survey (n=738)

Median shifted to 2059. But the question framing matters — "HLMI" yielded 2059, while "Full Automation of Labor (FAOL)" yielded 2164.

Median: 2059
The Great Compression
2023
Grace et al. AI Researcher Survey (n=2,778)

Massive shift: the median HLMI forecast pulled forward to 2047, from 2059 the year before. The aggregate forecast gave a 10% chance of HLMI by 2027 and roughly 25% by 2030. The Full Automation of Labor (FAOL) median pulled forward from 2164 to 2116. The sharpest revision in the survey series' history.

Median: 2047
2023
Sam Altman (OpenAI CEO)

"AGI is going to get developed basically regardless... I believe the best thing we can do is try to steer it well." Gave no firm date but implied near-term.

2024
Demis Hassabis (Google DeepMind CEO)

"AGI could arrive within the next decade." Nobel Prize acceptance speech cited rapid progress in protein folding and reasoning.

Target: by ~2034
2024
Dario Amodei (Anthropic CEO)

"Powerful AI" could arrive by 2026. Published Machines of Loving Grace essay on potential positive impacts. Described AI that could compress a century of biological research into 5-10 years.

Target: 2026–2027
2024
Jensen Huang (NVIDIA CEO)

"If I gave an AI... every single test you can imagine, you make a list of tests and put it in front of the computer science industry, and I believe in five years' time, we'll do well on every single one."

Target: ~2029
2024
Geoffrey Hinton (Turing Award, Nobel Prize 2024)

Initially thought AGI was decades away. After leaving Google (2023), revised to "5-20 years." By 2024: "could happen within 5 years." Cited the speed of progress as the reason for his shift.

Target: 2028–2034
2024
Mustafa Suleyman (CEO, Microsoft AI)

Coined "ACI" (Artificial Capable Intelligence) as a more practical goalpost than AGI. "We'll have ACI within 2-3 years." Author of The Coming Wave (2023).

Target: 2026–2027
2025
Ray Kurzweil (Google, author)

Doubled down on 2029 prediction from 2005. "We're basically on track." Published The Singularity Is Nearer (2024) defending the timeline.

Target: 2029 (unchanged)
2025
Sam Altman (OpenAI CEO)

"We are now confident we know how to build AGI as we have traditionally understood it." January 2025 blog post.

Target: imminent
2025
Elon Musk (xAI)

"AI will probably be smarter than any single human around the end of next year" (2026). Later said "smarter than all humans combined" by 2028-2029.

Target: 2026–2029
2025
Estimated consensus (extrapolated from smaller polls and Metaculus)

Multiple smaller surveys and prediction markets suggest the researcher median has continued compressing toward ~2035-2040. No full Grace et al. update confirmed for 2025, but the trend is unmistakable.

Estimated median: ~2035-2040

The Collapsing Timeline

Median predicted year for Human-Level Machine Intelligence (HLMI) — AI researcher surveys

Each data point represents when AI researchers, surveyed that year, said there was a 50% chance of achieving HLMI. The gap between "when asked" and "predicted year" shrank from 45 years to under 15.
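To make that compression concrete, here is a minimal Python sketch of the gap computation, using the survey medians cited above. The 2025 point is this article's estimated consensus, taken at the midpoint (2038) of its ~2035-2040 range:

```python
# Gap between when researchers were asked and when they said HLMI
# would arrive (50% probability), using the medians cited above.
surveys = [
    (2016, 2061),  # Grace et al. 2016, n=352
    (2022, 2059),  # Grace et al. 2022, n=738
    (2023, 2047),  # Grace et al. 2023, n=2,778
    (2025, 2038),  # estimated consensus (midpoint of ~2035-2040)
]

for asked, predicted in surveys:
    gap = predicted - asked
    print(f"Asked in {asked}: HLMI median {predicted} (gap: {gap} years)")

# Output:
# Asked in 2016: HLMI median 2061 (gap: 45 years)
# Asked in 2022: HLMI median 2059 (gap: 37 years)
# Asked in 2023: HLMI median 2047 (gap: 24 years)
# Asked in 2025: HLMI median 2038 (gap: 13 years)
```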

The Other Side

The Skeptics

Not everyone agrees the collapsing timelines are justified. Several prominent researchers caution against extrapolating recent progress:

Yann LeCun (Meta, Turing Award)

"We are not on the path to AGI." Has consistently argued that LLMs lack world models and cannot achieve true understanding. Predicts AGI requires fundamental new architectures not yet invented.

Gary Marcus (NYU, cognitive scientist)

"Deep learning is hitting a wall." Argues LLMs are sophisticated pattern matchers, not reasoning systems. Points to persistent failures in compositional reasoning and reliability.

Rodney Brooks (MIT, iRobot co-founder)

Predicts AGI no earlier than 2300. Maintains a public "predictions scorecard" tracking AI hype vs reality. Has been consistently skeptical of AGI timelines since the 1990s.

François Chollet (Google, Keras creator)

Created the ARC (Abstraction and Reasoning Corpus) benchmark specifically to test genuine reasoning vs. pattern matching. Argues current AI is "skill," not "intelligence": good at specific tasks, unable to generalize.

The core skeptic argument: benchmarks are being saturated without the underlying problem being solved. Models get better at tests designed to measure intelligence without becoming intelligent. The "moving goalpost" works both ways — what counts as AGI keeps being redefined downward to match what current systems can do.

When Industry Leaders Say AGI Will Arrive

Predictions from CEOs and prominent researchers, as of their most recent public statements

Range bars show the span of each prediction. Note that definitions of "AGI" vary significantly between predictors — some mean superhuman performance on all cognitive tasks, others mean economically valuable autonomous work.

Key Caveat

The Definition Problem

Every prediction above carries an asterisk: nobody agrees on what "AGI" means.

Definition | Who Uses It | Implied Timeline
"AI that can do any intellectual task a human can" | Classical definition (Minsky, McCarthy) | Decades away, or never
"AI that can do economically valuable work autonomously" | OpenAI's internal definition | 2025–2028
"AI that passes every benchmark we can think of" | Jensen Huang's framing | ~2029
"AI that can fully automate all labor" | Grace et al. FAOL metric | 2100+
"AI that can do novel science at PhD level" | Amodei's practical test | 2026–2027

The timelines haven't just compressed; the definition has softened. When someone in 1967 said "AGI in 20 years," they meant something more ambitious than what many leaders today mean by "AGI in 3 years." The goalposts moved closer, and they also got wider.

Prediction Markets: The Wisdom of Crowds

Metaculus community median for "When will AGI be achieved?" — sampled over time

Metaculus aggregates forecasts from thousands of predictors. The dramatic 2023-2024 compression mirrors the shift in expert surveys — ChatGPT and GPT-4 moved the Overton window on what seemed possible.
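For intuition about what that aggregation does, here is a minimal Python sketch of weighted-median aggregation. The forecasts and weights below are hypothetical, and Metaculus's actual Community Prediction algorithm is more involved (it weights recent forecasts more heavily, among other adjustments); this only illustrates the basic idea of collapsing many individual forecasts into one median:

```python
def weighted_median(pairs):
    """pairs: list of (forecast_year, weight). Returns the weighted
    median: the smallest year at which cumulative weight reaches
    half the total weight."""
    pairs = sorted(pairs)
    total = sum(w for _, w in pairs)
    cum = 0.0
    for year, w in pairs:
        cum += w
        if cum >= total / 2:
            return year

# Hypothetical forecasts for "When will AGI be achieved?" from five
# predictors; weights stand in for recency (higher = more recent).
forecasts = [(2027, 5), (2030, 4), (2032, 3), (2045, 2), (2060, 1)]
print(weighted_median(forecasts))  # -> 2030 (recency-weighted)

# With equal weights the aggregate is the plain community median:
print(weighted_median([(y, 1) for y, _ in forecasts]))  # -> 2032
```

A design note: a median is robust to extreme forecasts, so a handful of predictors answering "next year" or "never" barely moves the aggregate, which is one reason sustained shifts in the community median are informative.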