What Is Generative AI?
How it works, what it creates, where it falls short, and the best tools in each category. A practical guide for people who need to understand it — not just a definition.
00 — The definition that actually makes sense
Generative AI is AI that produces new content — text, images, audio, video, code — rather than classifying or analysing existing content. That distinction matters more than most explainers acknowledge.
Traditional AI is mainly about prediction and classification. A spam filter classifies email. A fraud detection system predicts whether a transaction is suspicious. A recommendation algorithm predicts what you'll want to watch next. These systems work on existing data and produce a label, a score, or a ranked list. They don't create.
Generative AI does something fundamentally different: it produces new artifacts that didn't exist before. Write a product description. Generate a photorealistic image of a concept that's never been photographed. Compose music in the style of a specific artist. Write and run code to analyse a dataset. These are generative tasks — and they became practically possible at scale only after 2022, when transformer architectures reached sufficient scale and training sophistication.
The market reflects this shift. Global generative AI investment reached $33.9 billion in 2023, up from $2.7 billion in 2019 (Goldman Sachs). By 2026, it's the most heavily funded segment of the AI industry.
- $33.9B in generative AI investment in 2023, up from $2.7B in 2019 (Goldman Sachs)
- Projected economic value from generative AI by 2032 (Bloomberg estimate)
- Share of knowledge-worker tasks that can be augmented by generative AI tools (McKinsey estimate)
- 2022: the year generative AI crossed the practical usability threshold, with DALL-E 2, Stable Diffusion, and ChatGPT all launching within months
01 — Generative AI vs traditional AI: the key difference
Most people encounter both types daily and don't realise they're different things. Here's the distinction.
| Aspect | Traditional AI | Generative AI |
|---|---|---|
| What it does | Classifies, predicts, or ranks existing content | Creates new content that didn't exist before |
| Output type | Label, score, recommendation, decision | Text, image, audio, video, code, data |
| Common examples | Spam filter, fraud detection, Netflix recommendations, face recognition | ChatGPT, Midjourney, GitHub Copilot, Suno AI |
| Training goal | Minimise prediction error on labelled data | Learn the underlying distribution of content and generate samples from it |
| Failure mode | Wrong predictions, biased classifications | Hallucinations, factual errors, copyright issues |
| Human role | Review decisions the model makes | Direct the model's creation and evaluate output quality |
Both types are "AI" in the broad sense, but they require completely different mental models for using them well. Traditional AI systems are mostly invisible infrastructure — they're making decisions behind the scenes. Generative AI is interactive — you prompt it, it creates, you evaluate and iterate.
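The table's "training goal" row is the crux, and it can be made concrete with a toy sketch. The code below is purely illustrative, not how production systems are built: a keyword spam classifier stands in for traditional AI (it maps existing content to a label), and a tiny two-word Markov chain stands in for generative AI (it learns the distribution of a corpus and samples new text from it, which is what LLMs do at vastly greater scale).

```python
import random

# Traditional AI (toy illustration): classify an existing email as spam or not.
def classify_spam(text: str) -> str:
    spam_words = {"winner", "free", "urgent"}
    hits = sum(word in spam_words for word in text.lower().split())
    return "spam" if hits >= 2 else "not spam"

# Generative AI (toy illustration): learn which word tends to follow which,
# i.e. a crude model of the corpus's distribution.
def train_markov(corpus: str) -> dict:
    words = corpus.split()
    model: dict = {}
    for current, following in zip(words, words[1:]):
        model.setdefault(current, []).append(following)
    return model

def generate(model: dict, start: str, length: int, seed: int = 0) -> str:
    rng = random.Random(seed)  # seeded so the sample is reproducible
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

model = train_markov("the cat sat on the mat and the dog sat on the rug")
print(classify_spam("URGENT you are a winner"))  # a label about existing content
print(generate(model, "the", 6))                 # new content sampled from the model
```

The classifier can only ever emit one of two labels it was given; the generator emits word sequences that never appeared verbatim in its training corpus. That asymmetry is the whole distinction in miniature.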
02 — The 6 output categories: what each produces and the leading tools
Generative AI isn't one thing. The underlying architectures and the tools built on them differ significantly by output type.
**Text.** The most mature category. LLMs can write, summarise, translate, analyse, and converse across virtually any domain. Quality is high for most tasks; hallucinations remain the key limitation.
**Images.** Diffusion models produce photorealistic and stylised images from text prompts. Quality has improved dramatically since 2022 — hands, text, and lighting are now reliably rendered. Copyright remains contested.
**Code.** Code generation has seen the most measurable productivity impact of any GenAI category. A 2023 MIT study found developers completed tasks 55.8% faster with AI assistance.
**Audio.** Music generation (Suno, Udio) and voice synthesis (ElevenLabs) reached commercial quality in 2024. Voice cloning is both a tool and a fraud risk — expect regulation.
**Video.** The least mature category but progressing fastest. Sora produced 60-second photorealistic video clips in 2024. By 2026, 5-minute coherent narratives are achievable. Temporal consistency is still imperfect.
**Data.** LLMs can generate structured JSON, CSV, or analytical outputs from unstructured inputs. Synthetic data generation for training other AI models is a significant enterprise use case.
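The standard safeguard for this category is validating the model's structured output before it enters a pipeline. A minimal sketch, with the model's reply simulated here as a raw string (the field names are invented for illustration):

```python
import json

# Simulated LLM reply -- in practice this string comes back from a model
# prompted to "respond only with JSON matching this schema".
model_reply = '{"product": "desk lamp", "sentiment": "positive", "score": 0.92}'

REQUIRED_FIELDS = {"product": str, "sentiment": str, "score": float}

def parse_structured(reply: str) -> dict:
    """Parse and validate generated JSON before trusting it downstream."""
    data = json.loads(reply)  # raises an error on malformed output
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"missing or mistyped field: {field}")
    return data

record = parse_structured(model_reply)
print(record["sentiment"], record["score"])  # positive 0.92
```

Because models occasionally emit malformed or incomplete JSON, production pipelines typically wrap this parse step in a retry loop that re-prompts the model on failure.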
These categories converge. GPT-4o is multimodal — it reads images, generates text, and can analyse data in the same conversation. The future is integrated: prompt a single model, get text, image, code, and analysis back in one response.
03 — The limitations that actually matter
Generative AI is genuinely impressive. It's also genuinely limited in specific ways. Knowing these saves you from both overuse and misuse.
Text models generate statistically plausible content, not verified facts. They can invent citations, produce wrong statistics, and state false things confidently. Always verify factual claims in high-stakes contexts.
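One lightweight safeguard is to flag every checkable claim in a generated draft for human verification before publication. A sketch of that idea, using the kinds of claims mentioned above (the draft text and its citation are invented for illustration):

```python
import re

# An invented generated draft containing the kinds of claims that need checking.
draft = ("Generative AI investment reached $33.9 billion in 2023, up from "
         "$2.7 billion in 2019, and 40% of firms report adoption (Smith et al., 2024).")

CLAIM_PATTERNS = [
    r"\$\d[\d.,]*\s*(?:billion|million|trillion)",  # dollar figures
    r"\b\d{1,3}(?:\.\d+)?%",                        # percentages
    r"\([A-Z][a-z]+ et al\., \d{4}\)",              # citation-style references
]

def flag_claims(text: str) -> list:
    """Extract figures and citations from generated text so a human
    can verify each one against a source before publication."""
    claims = []
    for pattern in CLAIM_PATTERNS:
        claims.extend(re.findall(pattern, text))
    return claims

for claim in flag_claims(draft):
    print("VERIFY:", claim)
```

This catches only surface-level claims; it cannot detect a plausible-sounding false statement with no number or citation attached, which is why human review remains the backstop.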
Models trained on copyrighted content can reproduce elements of that training data. The legal landscape is actively shifting — several cases are in courts in 2026. Understand your organisation's risk tolerance before deploying GenAI for commercial content.
The same model produces wildly different quality outputs depending on how you prompt it. Expert users get dramatically better results than novices from identical models. This creates a skill gap that compounds over time.
GenAI excels at the bulk of content production but often requires human review, editing, and judgment at the end. The closer to final publication or deployment, the more human oversight matters.
None of these are reasons not to use generative AI. They're reasons to use it with appropriate oversight. The organisations that are getting the best results aren't the ones using it blindly — they're the ones who've built clear workflows that include human review at the right checkpoints.
It's a category that's still being defined.
The tools that feel bleeding-edge today — Midjourney, Sora, GPT-4o — will be as mundane as Google Docs in five years. What matters now is understanding what each category does well, where it fails, and how to integrate it into work that matters.
The businesses getting the most from generative AI aren't the ones that deployed it fastest. They're the ones who understood it well enough to deploy it in the right places. That understanding starts with distinguishing between what generative AI is genuinely good at and what still needs humans.