March 15, 2026

The AI Timeline

When does it all change? The honest answer about AGI, timelines, and what to believe.

Before comparing predictions, you need to understand that every person making one is using a different definition of what they're predicting. "AGI" is not a technical standard — it's a contested concept that each major figure has defined to reflect their own goals, fears, or research agenda. The disagreement isn't just about when. It's about what.

Sam Altman
CEO, OpenAI
"A highly autonomous system that outperforms humans at most economically valuable work."
Microsoft's contract with OpenAI defines AGI as the point at which an AI system can generate $100 billion in profits. Commercial, specific, measurable. [ALT]
Dario Amodei
CEO, Anthropic
"AI broadly better than all humans at almost all things."
Prefers "powerful AI" to AGI. Thinks of it as "a country of geniuses in a data centre" — a civilisation-level intellectual resource. [AMO]
Demis Hassabis
CEO, Google DeepMind
"Cross-domain brilliance" — the ability to discover new science.
Sets a much higher bar: can it come up with something equivalent to General Relativity? Can it discover something no human has discovered? Nobel-Prize-level originality. [HAS]
Yann LeCun
Chief AI Scientist, Meta
Rejects "AGI" as a concept entirely.
Argues human intelligence is domain-specific, not "general." Wants to retire the term. Current LLMs cannot "understand the world" — and no amount of scaling will change this without new architectures. [LEC]
Geoffrey Hinton
"Godfather of AI" — Independent
"As good as humans at nearly all cognitive tasks."
The broadest and least specific definition among the major figures. His concern is less about when it arrives and more about what happens after it does. [HIN]
The structural reason experts disagree

When Altman says "we know how to build AGI," he means by his commercial definition. When Hassabis says "2030 is a 50% chance," he means by his scientific originality definition. When LeCun says "not for decades," he means by his understanding-based definition. They are not disagreeing about the same thing. They are describing different destination points on the same journey — and they happen to be riding different vehicles.

So what does this mean?

Every "AGI by [year]" headline is meaningless without asking: whose definition? Altman's commercial milestone and Hassabis's scientific discovery bar are completely different targets.

If you're reading AI timelines, the first question isn't "when?" — it's "what, exactly, are they predicting?" The definition determines the answer. Until the industry agrees on what AGI actually means, every prediction is measuring a different finish line.

Five of the most influential people in AI have published or stated their timeline predictions. Their estimates span from "now" to "decades away." The spread is the story.

Sam Altman
CEO, OpenAI — "The Optimist"
NOW → 2035
"We are now confident we know how to build AGI as we have traditionally understood it... We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word." [ALT]
"2025 has seen the arrival of agents that can do real cognitive work... 2026 will likely see systems that can figure out novel insights. 2027 may see robots that can do tasks in the real world." [GS]
Notable: Altman acknowledged that "AGI has become a very sloppy term." OpenAI and Microsoft's contract defines AGI financially — as achieving $100B profits. Critics note this definition makes AGI primarily a commercial milestone, not a scientific one.
Dario Amodei
CEO, Anthropic — "The Urgent Optimist"
2026–2027
"By 2026 or 2027, AI systems broadly better than all humans at almost all things." "I'm more confident than I've ever been that we're close to powerful capabilities — in the next 2–3 years." [AMO]
"Think of it as a country of geniuses in a data centre" — not AGI as a single system but as a collective superintelligence resource that could compress decades of scientific progress into a few years.
Notable: Amodei is simultaneously the most bullish on near-term timelines and runs one of the most safety-focused AI labs. His 2023 essay "Machines of Loving Grace" outlines both the extraordinary potential (curing cancer, eliminating poverty) and the extraordinary risks. The tension is intentional.
Demis Hassabis
CEO, Google DeepMind — "The Scientist"
2030–2035
"My estimate is a 50% chance in the next five years — so by 2030, let's say... There are still two or three big innovations needed from here until we get to AGI." [HAS]
"AGI, probably the most transformative moment in human history, is on the horizon."
Notable: Hassabis won the 2024 Nobel Prize in Chemistry for AlphaFold — an AI system that predicted protein structures, solving a 50-year biological challenge. His bar for AGI (discovery of new science) has already been partially met by his own work.
Yann LeCun
Chief AI Scientist, Meta — "The Sceptic"
DECADES+
"If someone claims AGI is just around the corner, do not believe them." "LLMs will never reach human-level intelligence." Current AI lacks world models, physical intuition, and common sense — prerequisites that scaling cannot provide. [LEC]
"There is no question AI will reach and surpass human intelligence in all domains" — but it will require entirely new architectures, not incremental improvements to today's transformers.
Notable: LeCun is a Turing Award winner (2018) and arguably the most credentialled sceptic in the room. He predicted in 2022 that no LLM would understand a specific physics problem — and was proven wrong by GPT-4 within a year. He has not retracted his broader architectural critique.

Geoffrey Hinton, often called the "godfather of deep learning," occupies a unique position: he left his role at Google in 2023 specifically to speak freely about AI risks. He estimates AGI in 5–20 years, down from 30–50 years just a decade ago. His primary concern is not the arrival date. It is what happens after. [HIN]

Timeline comparison — how expert predictions span (2025–2045+)

Altman (OpenAI): AGI now → superintelligence ~2035
Amodei (Anthropic): powerful AI 2026–2027
Hassabis (DeepMind): 50% probability by 2030
Hinton (Independent): 5–20 years = 2031–2046
LeCun (Meta): decades away, needs new architectures
So what does this mean?

The disagreement itself is the data point. The most optimistic builder says "now." The most cautious says "decades." Everyone else clusters between 2027 and 2035. That range is remarkably narrow for the most consequential technology question of the century.

If even the sceptics acknowledge AI will eventually surpass humans in every domain — and they do — then the debate isn't about whether to prepare. It's about how quickly. Plan for the fast end of the range and you'll never be caught off guard.

Beyond individual expert opinion, prediction markets and crowd-forecasting platforms aggregate thousands of probabilistic bets into consensus estimates. These are people with real money or reputation on the line. They're not necessarily right — but they synthesise more information than any single expert view.
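Platforms like Metaculus report the community median rather than the mean, and the choice matters for a quantity like "year AGI arrives," where a few "next century" answers would otherwise dominate. A minimal sketch with made-up forecast numbers (illustrative only, not real platform data) shows the difference:

```python
from statistics import median

# Hypothetical forecasts: each value is one forecaster's estimate
# of the year AGI arrives. These numbers are illustrative only.
forecasts = [2027, 2029, 2030, 2031, 2033, 2040, 2055, 2100]

# The community estimate is the median of all forecasts: a handful
# of extreme "not for a century" answers barely move it.
print(median(forecasts))                # -> 2032.0
print(sum(forecasts) / len(forecasts))  # mean -> 2043.125, dragged up by outliers
```

This robustness to outliers is one reason crowd medians can shift sharply (as Metaculus did, from 2041 to 2031) only when the bulk of forecasters move, not when a few tails do.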

Forecasting platforms

Metaculus — 10,000+ Forecasters
2031
Median forecast for "the first general AI system" — requiring a hard Turing test, general robotics capability, and 90% accuracy on academic benchmarks. Median for "weakly general AI": 2027. [META]
Manifold Markets — Prediction Market
47%
Probability that AGI arrives before 2028, using a subjective "can perform any intellectual task a human can" definition. Even-odds chance of AGI before the end of the decade. [MAN]
AI Impacts — ML Researcher Survey
2047
Median 50% probability of "high-level machine intelligence" — shortened by 12 years from 2059 since the 2022 survey after ChatGPT's launch. The experts were surprised too. [AII]

The most striking data point: the AI Impacts 2022 researcher survey had a median AGI estimate of 2059. After ChatGPT launched in November 2022, the same researchers moved their median to 2047 — a 12-year compression in a single survey cycle. The people building AI systems are updating toward faster timelines in real time. [AII]

Risk estimates

The disagreement about timelines runs parallel to a disagreement about risk. These are not the same question — a system could arrive quickly and be safe, or arrive slowly and still be catastrophic. But the people most confident about near timelines also differ sharply on what that nearness means for humanity.

Geoffrey Hinton
10–20%
Estimated probability of AI causing human extinction within 30 years. Left Google to warn about this. [HIN]
Yoshua Bengio
20%
Turing Award winner. Called for international AI governance comparable to nuclear weapons controls. [80K]
Yann LeCun
~0%
Calls existential risk fears "complete B.S." Argues current systems are nowhere near civilisation-level risk. [LEC]
Sam Altman
LOW
Believes the risk is manageable and the benefits — ending disease, creating abundance — outweigh the downside. [ALT]

The median estimate from AI researchers surveyed by AI Impacts in 2023 was that there is a roughly 5–10% probability of AI causing civilisation-ending catastrophe within the next century. The variance is the story: on the single most consequential question about the technology, the experts building it disagree by an order of magnitude. [AII]

So what does this mean?

The 12-year compression of researcher estimates — from 2059 to 2047 in a single survey cycle — is the real signal. It's not the prediction that matters. It's the rate of change of the prediction.

When the people building AI are consistently wrong about timelines in the same direction (too slow), and their risk estimates range from 0% to 20% extinction probability, you're looking at a technology whose trajectory even its creators can't confidently predict. That uncertainty itself is actionable: prepare as if the optimists are right.

The most important data point is not any individual prediction — it's the direction of travel. Across virtually every major forecaster and research institution, AGI timeline estimates have been compressing since 2022. The question is not whether things are moving faster. It's how much faster.

Forecaster · Earlier estimate → Current estimate (2025–26)

AI Impacts Survey: median 50% by 2059 (2022) → median 50% by 2047 (2023), a 12-year compression after ChatGPT. [AII]
Demis Hassabis: "at least a decade, maybe 10 years" (2023) → "probably 3–5 years away" (Jan 2025); "50% by 2030" (June 2025). [HAS]
Geoffrey Hinton: "30–50 years" (before 2020) → "5–20 years" (2023 and since). [HIN]
Metaculus community: median 2041 for "first general AI" (Jan 2023) → median 2031 (mid-2024), a 10-year compression in 18 months. [META]
François Chollet: "15–25 years" (pre-2024) → "about five-ish years" (2025), after o3 solved 75% of the ARC-AGI benchmark. [CHO]
So what does this mean?

Every major forecaster has compressed their timeline. Metaculus moved 10 years in 18 months. Hassabis went from "a decade" to "3–5 years." Chollet went from "15–25 years" to "five-ish." The trend line is clear: the people closest to the technology keep being surprised by how fast it moves.

If you're making career plans, business strategy, or investment decisions based on "AI is 10+ years away," you're using data that even its authors have abandoned. The planning horizon has narrowed dramatically — and it may narrow again.

2059 → 2047

ML researcher median shifted 12 years in a single survey cycle after ChatGPT launched

47%

Manifold Markets probability that AGI arrives before 2028 — near even odds

5–20 yrs

Hinton's range — even the worried can't agree on when it arrives

5 things you can do this week to prepare for accelerating AI timelines.

1. Subscribe to Veltrix Collective. The timelines are compressing faster than any single source can track. We synthesise the data weekly so you don't have to — tools, trends, and what actually matters, delivered every Tuesday.

2. Test your job against AI right now. Open Claude or ChatGPT and try to get it to do the most knowledge-intensive part of your work. If it does 60%+ of the task, your role will change within 2 years. Start adapting now — not when the reorganisation memo lands.

3. Build an AI workflow for one recurring task. Pick something you do weekly — research, reporting, email drafting, data analysis — and build a repeatable prompt or n8n/Make automation around it. Copilot for code, Claude for writing, Gemini for research. The goal is fluency, not mastery.

4. Follow the definition, not the headline. When you see "AGI by 2027", ask whose definition. Altman's commercial milestone and Hassabis's scientific discovery bar are completely different targets. The headline obscures the nuance that matters. Read the source, not the summary.

5. Plan for the fastest plausible timeline. The Metaculus crowd median moved from 2041 to 2031 in 18 months. Plan your career, your business, and your skills as if the shortest credible estimate is correct. If it's wrong, you'll be over-prepared. If it's right, you'll be ready.

The experts moved their estimates by 12 years in a single survey cycle. Subscribe to Veltrix Collective — because the timeline is moving faster than the headlines.

Source references

ALT
Sam Altman — "Reflections" blog post, January 2025. "Confident we know how to build AGI." OpenAI/$100B profit definition. Bloomberg interview on AGI "during Trump's term." blog.samaltman.com
GS
Sam Altman — "The Gentle Singularity" blog post, May 2025. 2025–2027 milestone timeline: agents → novel insights → physical-world robots. blog.samaltman.com
AMO
Dario Amodei — "Machines of Loving Grace" + January 2025 interviews. 2026–2027 prediction. "Country of geniuses in a data centre." Potential + risk duality. darioamodei.com
HAS
Demis Hassabis — WIRED June 2025 + Axios AI Summit December 2025. "50% chance by 2030." "Two or three big innovations needed." Nobel Prize context (AlphaFold). wired.com
LEC
Yann LeCun — Meta posts and interviews 2024–2025. LLMs "will never reach human-level intelligence." Needs new architectures. Turing Award winner (2018).
HIN
Geoffrey Hinton — Resignation interviews 2023. 5–20 year timeline (down from 30–50). 10–20% extinction risk estimate. Advocates UBI.
AII
AI Impacts — "2023 Expert Survey on Progress in AI". Median 2047, down from 2059 in 2022. 5–10% civilisation-ending catastrophe probability. 12-year compression. aiimpacts.org
META
Metaculus.com — Crowd forecasting platform. Median 2031 for "first general AI" (10,000+ forecasters). 2027 for "weakly general AI". Compressed from 2041 in Jan 2023. metaculus.com
MAN
Manifold Markets — Prediction market. 47% probability AGI before 2028. Subjective "any intellectual task a human can" definition. manifold.markets
CHO
François Chollet — ARC Prize updates 2025. "About five-ish years" (down from 15–25). o3 solved 75% of ARC-AGI benchmark. arcprize.org
80K
80,000 Hours — "When Will AGI Arrive?", March 2025. Timeline compression analysis. Bengio's 20% extinction risk and governance position. 80000hours.org
CYB
Cybernews/Horizon3 — Builder definitions and timelines comparison, 2025. Compilation of expert definitions and timeline statements.
Veltrix Collective
The timeline is moving. Are you?

Weekly data briefings on AI tools, timelines, and what actually matters. No hype. No hedging. Just the data you need to make decisions before everyone else does.

Data synthesis as of March 2026. Expert predictions are drawn from public statements, interviews, and papers cited above. Forecasting platform data reflects snapshots at time of writing. Timeline estimates are inherently uncertain — treat them as directional indicators, not precise predictions. Inline tags (ALT, HAS, and so on) correspond to the reference key above.
Written by Luke Madden, founder of Veltrix Collective. Data synthesis and analysis by Vel.