What Is a Prompt in AI?
The instruction you give AI determines everything about what you get back. Same model, radically different outputs depending on how you ask. Here's how prompting works, what makes it good, and the techniques that get consistently better results.
00 — The definition
A prompt is any input you give an AI model. In practice, it's the instruction, question, or context that shapes what the AI generates. The quality of your prompt is the single biggest variable in the quality of your output — for any given model.
This matters more than most people realise. A 2023 study found that well-crafted prompts increased GPT-4's accuracy on complex reasoning tasks by up to 83% compared to basic prompts. You're not changing the model. You're changing how much of its capability you're accessing.
Think of it this way: an LLM contains an enormous amount of compressed knowledge and capability. Your prompt is the key that unlocks specific parts of that capability. A vague key unlocks a vague door. A precise key gets you exactly where you need to go.
With a vague prompt, what you get: a generic template with placeholders, formal tone, and no specificity. You'll rewrite the whole thing.
With a specific prompt, what you get: a nearly complete draft with the right benefit, the right length, and a matching tone. Minimal editing needed.
The improvement between those two prompts is typical. And it scales across every task: analysis, code generation, research, summarisation. The more specific context you provide, the better the model can focus its capabilities on exactly what you need.
01 — Five prompting frameworks that consistently work
These aren't tricks — they're structural patterns that align with how LLMs process information. Each is appropriate for different task types.
Zero-shot prompting. Give the model clear instructions with no examples. This works best for well-defined tasks the model has encountered frequently in training. The key is specificity: role, task, format, constraints, tone.
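A zero-shot prompt can be assembled from those five elements directly. A minimal sketch, where the helper name and the example wording are illustrative, not a prescribed template:

```python
# Sketch: assemble a zero-shot prompt covering role, task, format,
# constraints, and tone. All wording here is illustrative.
def build_zero_shot_prompt(role, task, fmt, constraints, tone):
    """Assemble a single-instruction prompt with no examples."""
    return (
        f"You are {role}. {task} "
        f"Format: {fmt}. Constraints: {constraints}. Tone: {tone}."
    )

prompt = build_zero_shot_prompt(
    role="a senior product marketer",
    task="Write a 100-word launch announcement for our scheduling feature.",
    fmt="two short paragraphs, plain text",
    constraints="no buzzwords, mention the free trial once",
    tone="confident but conversational",
)
```

Keeping the elements as named parameters makes it obvious when one is missing, which is exactly the failure mode of vague prompts.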
Few-shot prompting. Provide 2-3 examples of the format or style you want before asking for the actual output. This dramatically improves format adherence and style matching.
Message: "My account has been locked and I can't access anything" → URGENT
Message: "When does the new feature launch?" → NORMAL
Message: "Can I change my notification preferences?" → LOW-PRIORITY
Now classify: "I was double-charged this month" →
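The classifier prompt above can be built programmatically from the example pairs, which makes it easy to swap in new examples later. A sketch, using the same messages and labels:

```python
# Sketch: build the few-shot classification prompt shown above.
# Labelled examples come first; the final line is left unlabelled
# for the model to complete.
examples = [
    ("My account has been locked and I can't access anything", "URGENT"),
    ("When does the new feature launch?", "NORMAL"),
    ("Can I change my notification preferences?", "LOW-PRIORITY"),
]

def build_few_shot_prompt(examples, query):
    lines = [f'Message: "{msg}" -> {label}' for msg, label in examples]
    lines.append(f'Now classify: "{query}" ->')
    return "\n".join(lines)

prompt = build_few_shot_prompt(examples, "I was double-charged this month")
```

The trailing arrow with no label is deliberate: it signals that the model's job is to continue the established pattern.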
Chain-of-thought prompting. Ask the model to work through a problem step by step before giving the final answer. This dramatically improves performance on maths, logic, multi-step analysis, and complex decisions. Adding "Let's think step by step" or "Reason through this" before complex questions works remarkably well.
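In code, this is just a question plus a reasoning trigger appended at the end. A sketch, where the question is a made-up example:

```python
# Sketch: the chain-of-thought pattern is the question followed by a
# reasoning trigger. The question below is illustrative.
def with_cot(question, trigger="Let's think step by step."):
    return f"{question}\n\n{trigger}"

prompt = with_cot(
    "A subscription costs £12/month with a 25% discount if paid annually. "
    "What does one year cost when paid annually?"
)
```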
Role prompting. Assign the model a specific role, expertise level, or persona. This shifts the vocabulary, tone, and framing of responses toward that role's typical communication style. It works well for professional advice, technical explanations, and creative work.
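Most chat-style LLM APIs accept a list of role-tagged messages, and the persona conventionally goes in the system message. A sketch, with an illustrative persona:

```python
# Sketch: role prompting via the system/user message structure that
# chat-style LLM APIs commonly accept. The persona text is illustrative.
def role_prompt(persona, question):
    return [
        {"role": "system", "content": f"You are {persona}."},
        {"role": "user", "content": question},
    ]

messages = role_prompt(
    "an experienced UK employment lawyer who explains concepts to laypeople",
    "What should I check before signing a settlement agreement?",
)
```

Putting the persona in the system message rather than the question keeps it in force across every turn of the conversation.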
Output formatting. Specify exactly what format you want the output in: JSON, markdown table, numbered list, specific headers, character limits. Modern LLMs follow format instructions reliably, which makes their output much easier to process downstream.
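The downstream payoff is that a format-constrained reply can be parsed mechanically. A sketch, where the reply is a hard-coded stand-in for a real model response:

```python
import json

# Sketch: ask for strict JSON with named keys, then parse the reply
# downstream. The reply below is a stand-in for a real model response.
prompt = (
    "Summarise the review below. Respond with JSON only, using exactly "
    'these keys: "sentiment" (positive|neutral|negative) and "summary" (string).'
)

simulated_reply = '{"sentiment": "positive", "summary": "Fast delivery, sturdy build."}'
result = json.loads(simulated_reply)  # raises loudly if the model drifts from JSON
```

Parsing with `json.loads` rather than string matching means a malformed reply fails immediately instead of silently corrupting later steps.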
02 — The mistakes beginners make
Most poor AI outputs come from one of these four prompting errors. They're easy to avoid once you know what to look for.
"Write a blog post" or "Summarise this" gives the model no target to aim for. It defaults to generic.
Fix: Add topic, audience, length, tone, key points to include, and what to avoid.
No format specified. The model will pick a format, and it won't always pick the right one for your use case.
Fix: Explicitly request: "Respond as a numbered list", "Use markdown headers", "Plain text, no bullet points".
Treating prompting as a single input → output exchange. Prompting is iterative; the first output is rarely optimal.
Fix: Follow up with "Make it more concise", "Adjust the tone to be less formal", "Add a section on X".
Context dumping. Pasting enormous amounts of unstructured text forces the model to guess what's relevant.
Fix: Front-load the most important context. Tell the model what to prioritise in the input you're providing.
A good prompt is a precise brief. Think of it like briefing a highly capable consultant who knows nothing specific about your situation: they need context, the goal, constraints, and the format you want. Give them that, and the output is good. Don't, and you get generic work.
03 — The skill that separates casual users from power users
Two people using identical AI tools get vastly different results. The difference is almost never the model — it's the prompt. A vague instruction produces generic output. A specific, contextual, well-structured instruction produces something you can actually use.
The good news: prompting is a learnable skill. And unlike most skills, the feedback loop is immediate. Bad prompt, bad output — adjust. Good prompt, good output — save it, reuse it, build a library. The people getting the most value from AI tools aren't the ones with the most technical background. They're the ones who've spent time developing their prompting intuition.