Veltrix
April 9, 2026

What Is Prompt Engineering? The Complete Guide for 2026



What Is Prompt Engineering?

Prompt engineering is the craft of writing AI instructions that reliably produce useful outputs. It's part communication skill, part systems thinking — here's how it actually works.

A prompt is everything you send to an AI model: the instructions, the context, the examples, the constraints. Prompt engineering is the practice of deliberately designing prompts to maximise the quality, consistency, and reliability of AI outputs.

The term sounds technical, but the underlying skill is mostly communication: clearly specifying what you want, in a way the model can act on, with enough context to avoid wrong assumptions. Poor prompts produce poor outputs not because the AI is stupid, but because the instructions are ambiguous. Better instructions produce better results — often dramatically so.

As models get more capable, raw prompting skill matters somewhat less for casual use — but system prompts, agent instructions, and production AI application design still require genuine prompt engineering expertise. Anthropic, OpenAI, and Google all publish detailed prompting guides, which are the best primary references alongside the techniques below.

Weak prompt
Write me a blog post about AI.
No topic focus, no audience, no length, no tone, no angle, no format. The model will make every decision and most will be wrong for your purpose.
Strong prompt
Write a 600-word blog post for small business owners who are curious about AI but haven't used it yet. Tone: practical and encouraging, not hyped. Structure: brief intro, 3 concrete examples (time saving, customer service, marketing), and a simple first step to try. No jargon. End with a call to action linking to /subscribe.
Specifies audience, length, tone, structure, constraints, and end goal. The model can produce exactly what you need without guessing.
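The elements a strong prompt specifies can also be assembled programmatically, which keeps prompts consistent across a team or a pipeline. A minimal sketch in Python; the `build_prompt` helper and its field names are illustrative, not any provider's API:

```python
def build_prompt(task, audience=None, length=None, tone=None,
                 structure=None, constraints=None):
    """Assemble a specific prompt from the elements a strong prompt names."""
    parts = [task]
    if audience:
        parts.append(f"Audience: {audience}")
    if length:
        parts.append(f"Length: {length}")
    if tone:
        parts.append(f"Tone: {tone}")
    if structure:
        parts.append(f"Structure: {structure}")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    return "\n".join(parts)

prompt = build_prompt(
    "Write a blog post about AI.",
    audience="small business owners who are curious about AI but haven't used it yet",
    length="600 words",
    tone="practical and encouraging, not hyped",
    structure="brief intro, 3 concrete examples, a simple first step to try",
    constraints=["no jargon", "end with a call to action linking to /subscribe"],
)
```

The point is not the helper itself but the checklist it encodes: any field left as `None` is a decision you are delegating to the model.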
1. Role prompting
When: You need expertise, a specific perspective, or a defined voice
Ask the model to take on a role. This frames the response in the knowledge, vocabulary, and perspective of that role — more effective than just asking for "expert" information.
Example
You are a senior employment lawyer specialising in UK workplace law. A small business owner has just asked you: "Can I fire an employee who's been with me for 18 months for poor performance?" Give practical, accurate advice with the relevant legal considerations they need to know.
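In chat-style APIs, the role typically goes in the system message while the question stays in the user turn. A minimal, provider-neutral sketch; the `with_role` helper is hypothetical, and exact message fields vary by provider:

```python
def with_role(role_description, user_question):
    # The role frames every response; the question is the user's turn.
    return [
        {"role": "system", "content": role_description},
        {"role": "user", "content": user_question},
    ]

messages = with_role(
    "You are a senior employment lawyer specialising in UK workplace law.",
    "Can I fire an employee who's been with me for 18 months for poor performance?",
)
```

Putting the role in the system message rather than the user turn keeps it in force across a multi-turn conversation.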
2. Chain-of-thought
When: Complex problems, multi-step reasoning, maths, analysis
Ask the model to reason step-by-step before giving an answer. Dramatically improves accuracy on reasoning tasks — the model catches its own errors when it works through them explicitly. Simply adding "think step by step" to a prompt is a reliable improvement.
Example
Our website gets 10,000 visitors/month. We convert 2% to email subscribers. Of those, 8% buy our £49 course. If we improved conversion to email subscribers to 3%, what would the monthly revenue impact be? Think through this step by step.
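The step-by-step arithmetic the prompt asks for can be checked directly; a quick worked version in Python:

```python
visitors = 10_000
price = 49  # £ per course sale

def monthly_revenue(email_rate):
    subscribers = visitors * email_rate  # visitors who become email subscribers
    buyers = subscribers * 0.08          # 8% of subscribers buy the course
    return buyers * price

current = monthly_revenue(0.02)   # 200 subscribers -> 16 buyers -> £784
improved = monthly_revenue(0.03)  # 300 subscribers -> 24 buyers -> £1,176
impact = improved - current       # +£392/month
```

This is exactly the decomposition "think step by step" elicits: the model is far more accurate walking through subscribers, then buyers, then revenue than jumping straight to the final figure.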
3. Few-shot examples
When: You need a specific format, style, or pattern consistently
Show the model 2-3 examples of the output you want before asking it to produce one. Far more effective than describing what you want in the abstract — the model infers the pattern from examples. Particularly powerful for classification tasks, formatting, and style matching.
Example
Classify these support tickets as Urgent/Normal/Low.

Example 1: "Can't login, needed for board presentation in 30 mins" → Urgent
Example 2: "How do I export my data to CSV?" → Normal
Example 3: "The font on the invoice looks a bit small" → Low

Now classify: "Payment failed when trying to upgrade, need receipt for expenses today"
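Assembling few-shot prompts by hand gets tedious once the examples live in a spreadsheet or database. A small sketch of a builder; the `few_shot_prompt` helper is hypothetical:

```python
def few_shot_prompt(instruction, examples, query):
    """Build a few-shot prompt: instruction, labelled examples, then the new input."""
    lines = [instruction, ""]
    for i, (text, label) in enumerate(examples, start=1):
        lines.append(f'Example {i}: "{text}" → {label}')
    lines.append(f'Now classify: "{query}"')
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Classify these support tickets as Urgent/Normal/Low.",
    [
        ("Can't login, needed for board presentation in 30 mins", "Urgent"),
        ("How do I export my data to CSV?", "Normal"),
        ("The font on the invoice looks a bit small", "Low"),
    ],
    "Payment failed when trying to upgrade, need receipt for expenses today",
)
```

Keeping examples as data also makes it trivial to swap them when the label set or house style changes.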
4. Structured output
When: You need machine-readable output or consistent formatting
Explicitly specify the format you want the output in — JSON, markdown table, numbered list, specific headings. Models follow format instructions reliably when they're explicit. For API use, asking for JSON output with a specified schema dramatically simplifies downstream processing.
Example
Extract the following from this job description and return as JSON:

{
  "job_title": string,
  "company": string,
  "salary_range": string or null,
  "required_skills": [array of strings],
  "years_experience": number or null,
  "remote": boolean
}

Job description: [paste here]
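When the JSON comes back, it is worth validating it against the schema you asked for before passing it downstream — models occasionally drop a field or return the wrong type. A minimal sketch using Python's standard `json` module; the `SCHEMA` mapping mirrors the prompt above and is illustrative:

```python
import json

# Field -> allowed Python types; NoneType where the prompt says "or null".
SCHEMA = {
    "job_title": (str,),
    "company": (str,),
    "salary_range": (str, type(None)),
    "required_skills": (list,),
    "years_experience": (int, float, type(None)),
    "remote": (bool,),
}

def validate(raw):
    """Parse the model's JSON reply and check it against the requested schema."""
    data = json.loads(raw)
    for field, types in SCHEMA.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], types):
            raise ValueError(f"wrong type for {field}")
    return data

reply = ('{"job_title": "Data Analyst", "company": "Acme", "salary_range": null, '
         '"required_skills": ["SQL", "Python"], "years_experience": 2, "remote": true}')
job = validate(reply)
```

Many providers now offer schema-enforced output modes as well; a validation step like this is the fallback that works everywhere.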
5. Constrained generation
When: You need to prevent specific outputs or enforce rules
Explicitly tell the model what NOT to do, as well as what to do. Models follow negative constraints well when stated clearly. Useful for maintaining brand voice, avoiding topics, or preventing common AI writing patterns (em dashes, filler phrases, sycophantic openers).
Example
Write a product description for this software. Rules:
- Do NOT start with "Introducing" or "Meet your new"
- Do NOT use the words "seamless", "revolutionise", or "game-changing"
- Do NOT use bullet points — prose only
- Keep under 80 words
- Focus on the specific outcome, not features
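Negative constraints can also be enforced after generation: check the output against the rules and regenerate on failure. A small sketch of a checker for the rules above; the `check_constraints` helper is hypothetical:

```python
import re

BANNED_OPENERS = ("Introducing", "Meet your new")
BANNED_WORDS = ("seamless", "revolutionise", "game-changing")

def check_constraints(text):
    """Return a list of violated rules (an empty list means the text passes)."""
    problems = []
    if text.strip().startswith(BANNED_OPENERS):
        problems.append("banned opener")
    lowered = text.lower()
    for word in BANNED_WORDS:
        if word in lowered:
            problems.append(f"banned word: {word}")
    if re.search(r"^\s*[-*•]", text, flags=re.MULTILINE):
        problems.append("bullet points used")
    if len(text.split()) > 80:
        problems.append("over 80 words")
    return problems
```

For example, `check_constraints("Introducing a seamless tool.")` flags both the opener and the banned word, while clean prose under 80 words returns an empty list.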
6. Self-critique / reflection
When: You need higher quality output and accuracy
After getting an initial response, ask the model to review and improve its own output. Or ask it to check for specific failure modes before finalising. Models improve their outputs meaningfully when asked to self-critique — particularly for factual accuracy, logic, and completeness.
Example
Review your answer above. Check for:
1. Any factual claims that could be inaccurate
2. Any logical gaps or unstated assumptions
3. Anything important that was omitted
4. Any advice that could be misleading in edge cases
Then provide a revised version that addresses these issues.
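The generate-then-critique pattern is easy to wrap in code. A sketch assuming a `call_model` placeholder for your provider's API call (prompt string in, completion string out — not a real library function):

```python
def self_critique(call_model, task_prompt):
    """Two-pass loop: draft an answer, then ask the model to review and revise it."""
    draft = call_model(task_prompt)
    critique_prompt = (
        f"Task: {task_prompt}\n\n"
        f"Draft answer:\n{draft}\n\n"
        "Review the draft for inaccurate claims, logical gaps, omissions, "
        "and advice that could mislead in edge cases. "
        "Then provide a revised version that addresses these issues."
    )
    return call_model(critique_prompt)
```

The second call costs another round trip, so this pattern is best reserved for outputs where accuracy matters more than latency.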
The diminishing returns reality
Prompt engineering has genuine diminishing returns. The first 20% of prompting skill — being specific about format, audience, length, and goal — produces 80% of the quality improvement. The remaining 80% of prompting knowledge produces incremental gains that matter most in production systems (chatbots, automated pipelines, agent instruction design) where prompts run thousands of times. For casual AI use, "be specific about what you want and show examples" covers most situations. For building production AI applications, systematic prompt testing, version control, and evaluation frameworks matter as much as the prompts themselves.
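A minimal evaluation harness illustrates what systematic prompt testing means in practice: run a prompt variant over labelled test cases and measure the pass rate. Everything here is a sketch; `call_model` is a stand-in for a real API call, and real evaluation frameworks add versioning, logging, and richer scoring:

```python
def run_evals(call_model, prompt_template, cases):
    """Score one prompt variant against labelled (input, expected_substring) cases.

    Returns the fraction of cases whose output contains the expected substring.
    """
    passed = 0
    for text, expected in cases:
        output = call_model(prompt_template.format(input=text))
        if expected.lower() in output.lower():
            passed += 1
    return passed / len(cases)
```

Comparing two prompt templates is then just two calls to `run_evals` over the same cases — which is the habit that separates production prompting from one-off chat use.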
Is prompt engineering still relevant as models get smarter?
Yes, but the nature of the skill shifts. Newer models like Claude 3.7 Sonnet and GPT-4o need less hand-holding for simple tasks — they're better at inferring intent from brief instructions. But for complex agent systems, system prompt design, production reliability, and multi-step workflows, prompt engineering remains highly relevant. The work is shifting from "write prompts for simple tasks" toward "design reliable AI systems using prompts" — requiring more systems thinking and less basic prompting craft. The skill isn't going away; it's evolving.
Where can I learn more about prompt engineering?
Best free resources:
- Anthropic's prompt engineering documentation (docs.anthropic.com/en/docs/build-with-claude/prompt-engineering) is the most thorough and practical reference — model-specific and regularly updated.
- OpenAI's prompt engineering guide is similarly good.
- The Prompting Guide at promptingguide.ai aggregates research on prompting techniques with examples.

For learning by doing: Anthropic's interactive prompt engineering course on their website is free and genuinely educational.

For depth: the "Prompt Engineering Guide" paper (Sahoo et al. 2024) on arXiv covers research-backed techniques comprehensively.

Get AI insights every week

The AI Briefing covers what actually matters in AI — no hype, no jargon, just what you need to stay ahead.

Subscribe free
Written by Luke Madden, founder of Veltrix Collective. Data synthesis and analysis by Vel.