March 22, 2026

What Is AI Hallucination? Why AI Makes Things Up and How to Catch It

AI hallucination defined — why it happens, the three types, how often leading LLMs hallucinate, and how to reduce it with RAG and grounding.

An AI hallucination is a confident, fluent, false statement generated by a language model. The model isn't malfunctioning when it hallucinates — it's doing exactly what it was trained to do. The problem is that "generate the most likely next token" doesn't require the output to be true.

In 2023, a lawyer named Steven Schwartz submitted a 10-page legal brief to a federal court that cited six cases as precedents. All six cases were fabricated. ChatGPT had invented plausible-sounding case names, courts, judges, and legal reasoning, and presented them with perfect confidence. The judge sanctioned both Schwartz and his firm. MATA

This case became famous because it involved lawyers and courts. But the same phenomenon happens constantly in lower-stakes contexts: wrong statistics, invented expert quotes, non-existent product features described in product descriptions, fabricated historical events. The outputs are fluent, formatted correctly, and completely wrong.

6 non-existent legal cases were cited in the Mata v. Avianca brief, all generated by ChatGPT. MATA

27% of ChatGPT responses in one 2023 study contained at least one factual error. VEC

3% hallucination rate for Claude on the same benchmark, among the lowest of major models. VEC

LLMs don't retrieve facts from a database. They predict the most statistically likely next token. Those are fundamentally different operations — and the difference explains why hallucinations are an inherent property of the architecture, not a bug to be patched.

When you ask an LLM a factual question, it doesn't look up the answer. It generates text that, given the question, resembles what a correct answer would look like — based on patterns in training data. If the training data contained many examples of correct answers to similar questions, the model likely produces correct output. If the question is about something obscure, recent, or where the training data had limited coverage, the model generates plausible-sounding text that may be completely wrong.

The confidence is structural. LLMs were trained on text where authoritative sources state things confidently. A Wikipedia article about a historical figure doesn't hedge its claims with "I think" — it states facts. The model learned to produce text that sounds authoritative, because that's what it saw. There's no separate truth-verification step in the generation process. JI

Specific triggers that increase hallucination risk: asking about very recent events (post-training cutoff), asking for specific statistics or citations, asking about highly specific technical or medical facts, asking about obscure entities, asking leading questions with false premises embedded.
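As a rough illustration, the triggers above can be turned into a toy pre-flight check that flags risky prompts for extra verification. The patterns and trigger names below are illustrative guesses, not a validated classifier:

```python
import re

# Toy pre-flight check for the hallucination triggers described above.
# Patterns are illustrative assumptions, not a validated classifier.
RISK_PATTERNS = {
    "recent events": r"\b(today|this week|latest|breaking)\b",
    "statistics or citations": r"\b(statistic|percent|study|paper|citation|according to)\b",
    "highly specific facts": r"\b(exact|precisely|what year|how many)\b",
}

def hallucination_risk_flags(prompt: str) -> list[str]:
    """Return the names of the risk triggers the prompt appears to match."""
    lowered = prompt.lower()
    return [name for name, pattern in RISK_PATTERNS.items()
            if re.search(pattern, lowered)]

hallucination_risk_flags("What year exactly was the treaty signed?")
# → ["highly specific facts"]
```

A flagged prompt doesn't mean the answer will be wrong, only that the output deserves a closer look before reuse.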

Factual errors

The model states an incorrect fact confidently. Wrong dates, wrong statistics, wrong attribution, wrong cause-and-effect. The most common type.

"The Eiffel Tower was completed in 1893" (it was 1889)

Citation fabrication

The model invents a plausible-sounding source — paper title, journal, author, year — that doesn't exist. Particularly dangerous in academic and legal contexts.

"According to Smith et al. (2021) in the Journal of Applied Psychology..." — no such paper exists

Reasoning failures

The model produces logical steps that look valid but aren't. Errors in the reasoning chain that lead to wrong conclusions, without any factual claims being individually false.

Multi-step maths problems, legal reasoning, complex conditional logic — the model can follow incorrect steps confidently

Hallucination rates vary significantly by model and task. Summarisation and paraphrase hallucination rates differ from open-domain QA hallucination rates. These figures come from the Vectara hallucination leaderboard, which measures summarisation faithfulness — one standardised metric among several.

| Model | Hallucination rate (summarisation) | Notes |
| --- | --- | --- |
| Claude 3.x (Anthropic) | ~3–5% | Consistently lowest hallucination rates in major benchmarks; Constitutional AI training helps |
| GPT-4o (OpenAI) | ~7–12% | Strong accuracy on well-represented topics; higher rates on obscure facts |
| Gemini 2.0 Pro (Google) | ~8–15% | Web-search grounding reduces hallucinations on current events; higher on closed-context tasks |
| LLaMA 3.1 (Meta) | ~10–18% | Open weights; varies significantly by task; useful for domains where you can verify outputs |
| GPT-3.5 / older models | ~25–40% | Substantially higher hallucination rates than newer models; not recommended for factual tasks |
What the numbers mean

Even the best models hallucinate on some tasks. A 3% hallucination rate sounds low until you're generating 10,000 pieces of content — that's 300 factual errors. Always match the verification overhead to the stakes of the task.
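A quick back-of-envelope sketch of that arithmetic:

```python
# Back-of-envelope arithmetic for error counts at volume.
def expected_errors(rate: float, n_items: int) -> float:
    """Expected number of outputs containing a hallucination."""
    return rate * n_items

def prob_any_error(rate: float, n_items: int) -> float:
    """Chance a batch contains at least one error, assuming independence."""
    return 1 - (1 - rate) ** n_items

expected_errors(0.03, 10_000)   # ≈ 300, the figure above
prob_any_error(0.03, 10)        # ≈ 0.26: a roughly 1-in-4 chance of an error in just 10 items
```

Even small batches at a "good" rate accumulate meaningful risk, which is why verification overhead should scale with the stakes.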

You can't eliminate hallucinations from LLMs, but you can reduce their impact substantially with workflow design.

1. Ground your prompts with source material

Instead of asking the model to recall facts, provide the source material in the prompt: "Based only on the following text, summarise the key findings." Done systematically, with relevant documents retrieved and inserted into the prompt automatically, this is Retrieval-Augmented Generation (RAG): the model works from supplied context rather than from memory. Hallucination rates drop dramatically when working from provided context.
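A minimal sketch of the grounded-prompt pattern; `build_grounded_prompt` and the instruction wording are illustrative, not any particular library's API:

```python
# Grounded prompting: the model answers from supplied text, not from memory.
# build_grounded_prompt is an illustrative helper, not a library function.
def build_grounded_prompt(source_text: str, question: str) -> str:
    return (
        "Answer based only on the following text. "
        "If the answer is not in the text, reply 'not stated in the source'.\n\n"
        f"SOURCE:\n{source_text}\n\n"
        f"QUESTION: {question}"
    )

prompt = build_grounded_prompt(
    "The Eiffel Tower was completed in 1889.",
    "When was the Eiffel Tower completed?",
)
# `prompt` is then sent to whatever model client you use.
```

The explicit fallback instruction ("not stated in the source") gives the model a safe alternative to inventing an answer.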

2. Ask the model to flag uncertainty

Include in your prompt: "If you're not certain about any specific fact, say so explicitly. Don't invent details — indicate when you're uncertain." Modern models respond reasonably well to this instruction, though it doesn't eliminate the problem entirely.
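One way to make that instruction operational is to scan responses for the uncertainty phrases you asked for and route flagged output to human review. The marker list below is an assumption; match it to the wording of your own instruction:

```python
# Scan model output for the uncertainty phrases the prompt asked for,
# and route flagged responses to human review. The marker list is an
# assumption; tune it to the wording of your own instruction.
UNCERTAINTY_MARKERS = ("i'm not certain", "i am not sure", "may be wrong", "uncertain")

def needs_review(response: str) -> bool:
    lowered = response.lower()
    return any(marker in lowered for marker in UNCERTAINTY_MARKERS)

needs_review("I'm not certain, but the figure is roughly 7%.")   # → True
needs_review("Paris is the capital of France.")                  # → False
```

Note the limitation: this only catches errors the model itself flags, so it complements rather than replaces verification.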

3. Choose models with lower hallucination rates

For factually critical tasks, Claude consistently outperforms other major models on hallucination benchmarks. Use the model best suited to the task. For tasks requiring real-time accuracy, use Perplexity or a model with web search grounding — not a static LLM.

4. Build verification into high-stakes workflows

For legal research, medical content, or financial reporting: treat LLM output as a draft requiring human verification, not a final product. The efficiency gains come from the draft generation, not from skipping the review step. The organisations that have had the most public AI failures were the ones that removed the human review step.

5. Never trust AI citations without verification

This is non-negotiable. If an LLM cites a specific paper, article, statistic, or court case — verify it exists before publishing or submitting it anywhere. Use Google Scholar, PubMed, or a primary source. The Mata v. Avianca case should be pinned above every AI-assisted legal research workstation as a reminder.
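A small helper can at least build the checklist: extract citation-like strings from a draft so each one can be looked up manually. The regex below is a rough illustration and will miss many real citation formats:

```python
import re

# Extract "Author et al. (Year)"-style citations so each can be checked
# against Google Scholar, PubMed, or a primary source before publishing.
# The pattern is a rough illustration; real citation formats vary widely.
CITATION_RE = re.compile(r"[A-Z][a-z]+ et al\.? \((\d{4})\)")

def citations_to_verify(text: str) -> list[str]:
    return [match.group(0) for match in CITATION_RE.finditer(text)]

citations_to_verify("According to Smith et al. (2021) in the Journal of Applied Psychology...")
# → ["Smith et al. (2021)"]
```

Extraction is the easy half; the verification itself, checking that each extracted citation actually exists, still needs a human or a lookup against a real bibliographic database.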

Sources
MATA
Mata v. Avianca, Inc., S.D.N.Y. 2023 (ChatGPT fake citations case). courtlistener.com
VEC
Vectara, LLM Hallucination Leaderboard, 2023–2024. github.com/vectara/hallucination-leaderboard
JI
Ji et al., Survey of Hallucination in Natural Language Generation, 2022. arxiv.org/abs/2202.03629
Hallucination is a structural property of LLMs.
Design your workflows accordingly.

AI hallucination won't be "fixed" in the same way a bug gets patched, because it emerges from the fundamental mechanism by which LLMs generate text. Models are getting better — hallucination rates have dropped significantly from 2022 to 2026 — but they'll always exist to some degree.

The right response is workflow design, not fear. Treat AI output as a capable first draft that requires verification for any factual claim, citation, or statistical data. For creative work, summarisation, and brainstorming, hallucination risk is low. For legal, medical, financial, and academic work, build the verification step in from the start. The efficiency gains from AI assistance are still enormous even with that step included.


Veltrix Collective · Sources: Mata v. Avianca (2023), Vectara Hallucination Leaderboard, Ji et al (2022). Published April 2026. Hallucination rates are task-dependent and benchmark-specific — actual rates vary by use case.

Written by Luke Madden, founder of Veltrix Collective. Data synthesis and analysis by Vel.