March 18, 2026

What Is AGI? Why Every Major AI Lab Is Racing to Build It (and What That Means for You)

AGI defined, where current AI falls short, what OpenAI and Anthropic say publicly about timelines, and why it matters for the economy and safety.

AGI — artificial general intelligence — is AI that can perform any intellectual task a human can perform, without being specifically trained for each task. The key word is "general." Everything we have today is narrow.

Current AI systems are impressive but deeply specialised. GPT-4o can write essays, debug code, and analyse images. But it can't drive a car, perform surgery, or adapt to a task it's never encountered in any form. AlphaGo defeated the world's best Go player — but it can't play chess. AlphaFold predicts protein structures with extraordinary accuracy — but it can't do anything else. Every AI system in production today is narrow AI: expert at its specific domain, useless outside it.

AGI is different in kind, not degree. It's not "a better ChatGPT." It's a system capable of generalising across arbitrary tasks — learning something in one domain and applying that learning to a completely different domain, the way humans do constantly without thinking about it.

The clearest definition comes from Anthropic: AGI would be "an AI system that is broadly as cognitively capable as a human across all intellectual domains." ANTH OpenAI defines it as "AI systems that are generally smarter than humans." OAI Both definitions are contested — there's no agreed technical benchmark for AGI, which is itself a significant problem for the field.

Understanding the gap isn't pessimism — it's necessary context for interpreting the claims coming out of AI labs every month.

| Capability | Human performance | Current best AI | AGI threshold |
| --- | --- | --- | --- |
| Task transfer | Trivially transfers learning across domains | Requires domain-specific training; limited transfer | Not reached |
| Causal reasoning | Builds causal mental models natively | Pattern matching, not causal understanding | Not reached |
| Long-horizon planning | Plans weeks, months, years ahead | Limited to context window; degrades quickly | Partially reached |
| Common sense | Implicit, vast, and reliable | Frequently fails on edge cases | Not reached |
| Self-directed learning | Seeks information, updates beliefs continuously | Requires retraining; can't self-update reliably | Not reached |
| Narrow task performance | Variable; exceptional in trained domains | Superhuman on specific benchmarks | Exceeded in many areas |
| Physical world interaction | Full embodied cognition | Robotic systems improving but brittle | Not reached |

OpenAI claimed in January 2025 that o3 had achieved "human-level performance across cognitive tasks" on several benchmarks. OAI But benchmark performance on curated test sets isn't the same as general intelligence. ARC-AGI benchmark scores improved dramatically — yet the tests themselves measure specific reasoning patterns, not the full breadth of human cognition. The lab's own researchers caution against interpreting these as AGI evidence.

Stripping the press releases: here's what the lab founders and researchers say in their own words.

Sam Altman — OpenAI CEO

"We may be only a few thousand days away from AGI... we are going to live in a world with fantastically more intelligence available."

Blog post, September 2024. ALTM

Dario Amodei — Anthropic CEO

"I think there's a real chance we're going to build something very close to AGI in the next few years... the pace of progress has been shocking even to those of us in the field."

Lex Fridman Podcast, 2024. DARIO

Demis Hassabis — Google DeepMind CEO

"We may be approaching something like AGI within the decade... but we need to be extremely careful about how we develop it — the alignment problem is real."

Wall Street Journal interview, 2024. DMD

What this actually means

The CEOs of the three leading AI labs all believe AGI is years away, not decades. That's a significant shift from the consensus of five years ago. All three also emphasise that the risks of getting it wrong are catastrophic.

The uncomfortable reality: these are the people building it, with the most access to what's happening inside the models, and they're all sounding both excited and genuinely worried. That combination deserves to be taken seriously.

Timeline predictions have a terrible track record in AI. But the rate of capability improvement since 2022 is different from anything that came before it.

2023
GPT-4 surprises researchers

GPT-4 passes the bar exam (90th percentile), the SAT (93rd percentile), and multiple AP exams. Many AI researchers had expected these milestones to be five or more years away. OAI

2024
Reasoning models arrive

OpenAI's o1, followed in early 2025 by Anthropic's Claude 3.7 with extended thinking, demonstrates multi-step reasoning that dramatically improves on prior models. OpenAI's o3 pushes ARC-AGI scores from under 10% to over 85%. ARC

2025
Agentic systems demonstrate autonomy

Models complete multi-hour software engineering tasks, conduct research, and operate computers autonomously. Devin, Operator, and Claude Computer Use mark a shift toward agency.

2026-2028
The credible window

Surveys of AI researchers in 2024 showed wide disagreement on timelines, with a substantial share of respondents placing >50% probability on AGI arriving by 2030. AIIM The "few years" claims from Altman and Amodei point to this window. Nothing is certain, and "AGI" remains undefined enough that any claim is contestable.

When will AGI arrive?

Nobody knows. Expert estimates range from "already here in a limited sense" (some OpenAI researchers) to "decades away" (sceptics like Gary Marcus). A 2024 survey of ML researchers found a median estimate of 2047 for AGI, but with enormous variance. AIIM The rate of capability improvement since 2022 has made earlier estimates obsolete in both directions — things moved faster than pessimists predicted, but some claimed milestones didn't generalise as hoped.

Has OpenAI already built AGI?

OpenAI has not announced AGI. Some researchers inside the lab have suggested o3 demonstrated "AGI-like" capabilities on specific benchmarks — but the company has not made an official AGI declaration. Under OpenAI's charter, the board would be notified if AGI was reached; that notification has not happened publicly. The definitional ambiguity is doing a lot of work here: if you define AGI narrowly (human-level on a set of benchmarks), you can claim we're close. If you define it broadly (genuine general reasoning), we're not there.

Is AGI dangerous?

This is a genuinely contested question. The "alignment problem" — ensuring that a superintelligent AI pursues goals that are beneficial to humans — is considered an important unsolved problem by researchers including those at Anthropic, DeepMind, and MIRI. The risk isn't the science-fiction scenario of a robot that decides to exterminate humans. The more technically grounded concern is an AI system that pursues its given objective in ways humans didn't intend, at a scale and speed that makes course correction difficult. Whether this constitutes an "existential risk" or a "manageable technical problem" is where the serious debate lies.

Sources
OAI — OpenAI, "Our Approach to AGI Safety" (openai.com)
ANTH — Anthropic, "Core Views on AI Safety," 2024 (anthropic.com)
ALTM — Sam Altman, "The Intelligence Age," 2024 (ia.samaltman.com)
DARIO — Dario Amodei, "Machines of Loving Grace," 2024 (darioamodei.com)
AIIM — AI Impacts, "2024 Expert Survey on Progress in AI" (aiimpacts.org)
ARC — ARC Prize, "ARC-AGI Benchmark Results," 2024 (arcprize.org)
AGI is the most consequential question in tech.
And nobody actually knows the answer yet.

The people building the most advanced AI systems believe they're years from AGI, not decades. That belief is based on internal knowledge we don't have access to. Whether their optimism is warranted or premature remains to be seen — but the capabilities we can observe keep crossing thresholds faster than anticipated.

The reasonable position is this: something significant is happening, the pace is accelerating, the implications are enormous, and the uncertainty is genuine. Neither dismissal nor panic serves you. What serves you is following the developments closely enough to understand what's real and what's hype, as the evidence develops.


Veltrix Collective · Sources: OpenAI, Anthropic, Sam Altman, Dario Amodei, AI Impacts, ARC Prize. Published April 2026. AGI remains undefined by any accepted technical standard. All claims about timelines are contested.

Written by Luke Madden, founder of Veltrix Collective. Data synthesis and analysis by Vel.