1. Role prompting
When: You need expertise, a specific perspective, or a defined voice
Ask the model to take on a role. This frames the response in the knowledge, vocabulary, and perspective of that role, which is more effective than simply asking for "expert" information.
Example
You are a senior employment lawyer specialising in UK workplace law. A small business owner has just asked you: "Can I fire an employee who's been with me for 18 months for poor performance?" Give practical, accurate advice with the relevant legal considerations they need to know.
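If you are calling a model through an API rather than a chat window, the role usually goes in a system message. A minimal sketch, assuming the common system/user chat-message convention (the actual API call is omitted):

```python
def role_prompt(role: str, question: str) -> list[dict]:
    """Build a chat message list that frames the reply in a given role."""
    return [
        {"role": "system", "content": f"You are {role}."},
        {"role": "user", "content": question},
    ]

messages = role_prompt(
    "a senior employment lawyer specialising in UK workplace law",
    "Can I fire an employee who's been with me for 18 months for poor performance?",
)
```

Putting the role in the system message keeps it active across the whole conversation, rather than only for the first turn.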
2. Chain-of-thought
When: Complex problems, multi-step reasoning, maths, analysis
Ask the model to reason step-by-step before giving an answer. Dramatically improves accuracy on reasoning tasks — the model catches its own errors when it works through them explicitly. Simply adding "think step by step" to a prompt is a reliable improvement.
Example
Our website gets 10,000 visitors/month. We convert 2% to email subscribers. Of those, 8% buy our £49 course. If we improved conversion to email subscribers to 3%, what would the monthly revenue impact be? Think through this step by step.
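The arithmetic the model should walk through here can be checked directly. Using the figures from the prompt:

```python
visitors = 10_000   # monthly visitors
price = 49          # course price in £
buy_rate = 0.08     # share of subscribers who buy

def monthly_revenue(email_rate: float) -> float:
    """Revenue for a given visitor-to-subscriber conversion rate."""
    subscribers = visitors * email_rate
    buyers = subscribers * buy_rate
    return buyers * price

current = monthly_revenue(0.02)   # 200 subscribers, 16 buyers, £784
improved = monthly_revenue(0.03)  # 300 subscribers, 24 buyers, £1,176
impact = improved - current       # +£392 per month
```

A step-by-step answer should surface each intermediate figure (subscribers, then buyers, then revenue) rather than jumping straight to the £392 difference.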
3. Few-shot examples
When: You need a specific format, style, or pattern consistently
Show the model 2-3 examples of the output you want before asking it to produce one. Far more effective than describing what you want in the abstract — the model infers the pattern from examples. Particularly powerful for classification tasks, formatting, and style matching.
Example
Classify these support tickets as Urgent/Normal/Low.
Example 1: "Can't login, needed for board presentation in 30 mins" → Urgent
Example 2: "How do I export my data to CSV?" → Normal
Example 3: "The font on the invoice looks a bit small" → Low
Now classify: "Payment failed when trying to upgrade, need receipt for expenses today"
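Few-shot prompts like the one above follow a fixed shape, so they are easy to generate programmatically. A sketch of a builder that assembles the instruction, the labelled examples, and the query:

```python
def few_shot_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a classification prompt from labelled (text, label) examples."""
    lines = [instruction]
    for i, (text, label) in enumerate(examples, start=1):
        lines.append(f'Example {i}: "{text}" → {label}')
    lines.append(f'Now classify: "{query}"')
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Classify these support tickets as Urgent/Normal/Low.",
    [
        ("Can't login, needed for board presentation in 30 mins", "Urgent"),
        ("How do I export my data to CSV?", "Normal"),
        ("The font on the invoice looks a bit small", "Low"),
    ],
    "Payment failed when trying to upgrade, need receipt for expenses today",
)
```

Keeping examples as data rather than hard-coded text makes it trivial to swap them per task or test which examples work best.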
4. Structured output
When: You need machine-readable output or consistent formatting
Explicitly specify the format you want the output in — JSON, markdown table, numbered list, specific headings. Models follow format instructions reliably when they're explicit. For API use, asking for JSON output with a specified schema dramatically simplifies downstream processing.
Example
Extract the following from this job description and return as JSON:
{
"job_title": string,
"company": string,
"salary_range": string or null,
"required_skills": [array of strings],
"years_experience": number or null,
"remote": boolean
}
Job description: [paste here]
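The payoff of specifying a schema is that you can validate the reply before anything downstream touches it. A minimal sketch of a validator for the schema above (the key/type table mirrors the prompt; `json.loads` will raise on malformed JSON):

```python
import json

# Allowed types per key, matching the schema in the prompt
SCHEMA = {
    "job_title": (str,),
    "company": (str,),
    "salary_range": (str, type(None)),
    "required_skills": (list,),
    "years_experience": (int, float, type(None)),
    "remote": (bool,),
}

def validate(raw: str) -> dict:
    """Parse the model's JSON reply and check it matches the schema."""
    data = json.loads(raw)
    for key, allowed in SCHEMA.items():
        if key not in data:
            raise ValueError(f"missing key: {key}")
        if not isinstance(data[key], allowed):
            raise ValueError(f"wrong type for {key}: {type(data[key]).__name__}")
    return data
```

In a pipeline, a failed validation is your signal to retry the request rather than pass bad data along.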
5. Constrained generation
When: You need to prevent specific outputs or enforce rules
Explicitly tell the model what NOT to do, as well as what to do. Models follow negative constraints well when stated clearly. Useful for maintaining brand voice, avoiding topics, or preventing common AI writing patterns (em dashes, filler phrases, sycophantic openers).
Example
Write a product description for this software.
Rules:
- Do NOT start with "Introducing" or "Meet your new"
- Do NOT use the words "seamless", "revolutionise", or "game-changing"
- Do NOT use bullet points — prose only
- Keep under 80 words
- Focus on the specific outcome, not features
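Negative constraints are also easy to verify after the fact, which matters in automated pipelines where you want to retry when a rule is broken. A sketch of a checker for the rules above:

```python
BANNED_OPENERS = ("Introducing", "Meet your new")
BANNED_WORDS = ("seamless", "revolutionise", "game-changing")

def check_constraints(text: str) -> list[str]:
    """Return a list of rule violations (empty list means the text passes)."""
    violations = []
    if text.startswith(BANNED_OPENERS):
        violations.append("banned opener")
    lowered = text.lower()
    for word in BANNED_WORDS:
        if word in lowered:
            violations.append(f"banned word: {word}")
    if any(line.lstrip().startswith(("-", "*", "•")) for line in text.splitlines()):
        violations.append("bullet points")
    if len(text.split()) > 80:
        violations.append("over 80 words")
    return violations
```

The "focus on outcomes, not features" rule is deliberately omitted: it is a judgment call, not something a string check can enforce.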
6. Self-critique / reflection
When: You need higher quality output and accuracy
After getting an initial response, ask the model to review and improve its own output. Or ask it to check for specific failure modes before finalising. Models improve their outputs meaningfully when asked to self-critique — particularly for factual accuracy, logic, and completeness.
Example
Review your answer above. Check for:
1. Any factual claims that could be inaccurate
2. Any logical gaps or unstated assumptions
3. Anything important that was omitted
4. Any advice that could be misleading in edge cases
Then provide a revised version that addresses these issues.
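In code, self-critique is a two-pass loop: generate a draft, then feed the draft back with the critique prompt. A sketch, where `generate` is a placeholder for whatever model call you use (a function taking a prompt string and returning the reply):

```python
def self_critique(generate, prompt: str, critique: str) -> str:
    """Two-pass generation: draft, then ask the model to revise its own work.

    `generate` is a stand-in for your model call; it takes a prompt
    string and returns the model's reply as a string.
    """
    draft = generate(prompt)
    revision_prompt = f"{prompt}\n\nYour previous answer:\n{draft}\n\n{critique}"
    return generate(revision_prompt)
```

Each pass costs a full model call, so in production this is typically reserved for high-stakes outputs rather than applied everywhere.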
The diminishing returns reality
Prompt engineering has genuine diminishing returns. The first 20% of prompting skill — being specific about format, audience, length, and goal — produces 80% of the quality improvement. The remaining 80% of prompting knowledge produces incremental gains that matter most in production systems (chatbots, automated pipelines, agent instruction design) where prompts run thousands of times. For casual AI use, "be specific about what you want and show examples" covers most situations. For building production AI applications, systematic prompt testing, version control, and evaluation frameworks matter as much as the prompts themselves.