March 19, 2026

What Is a Prompt in AI? How to Write Better Prompts and Get Better Results

5 prompt frameworks that work — zero-shot, few-shot, chain-of-thought, role-play, structured output — and the common mistakes beginners make.

A prompt is any input you give an AI model. In practice, it's the instruction, question, or context that shapes what the AI generates. The quality of your prompt is the single biggest variable in the quality of your output — for any given model.

This matters more than most people realise. A 2023 study found that well-crafted prompts increased GPT-4's accuracy on complex reasoning tasks by up to 83% compared to basic prompts. You're not changing the model. You're changing how much of its capability you're accessing.

Think of it this way: an LLM contains an enormous amount of compressed knowledge and capability. Your prompt is the key that unlocks specific parts of that capability. A vague key unlocks a vague door. A precise key gets you exactly where you need to go.

Weak prompt
Write a marketing email.

What you get: Generic template with placeholders. Formal tone. No specificity. You'll rewrite the whole thing.

Strong prompt
Write a 150-word email to existing customers announcing our new AI time-tracking feature. Tone: friendly, direct, benefits-first. Key benefit: saves 2 hours/week of manual entry. CTA: book a 15-min demo. Don't use exclamation marks.

What you get: A nearly complete draft. Specific benefit. Correct length. Matching tone. Minimal editing needed.

The improvement between those two prompts is typical. And it scales across every task: analysis, code generation, research, summarisation. The more specific context you provide, the better the model can focus its capabilities on exactly what you need.

These aren't tricks — they're structural patterns that align with how LLMs process information. Each is appropriate for different task types.

ZERO-SHOT Direct instruction

Give the model clear instructions with no examples. Works best for well-defined tasks the model has encountered frequently in training. The key is specificity: role, task, format, constraints, tone.

You are a senior financial analyst. Summarise the following earnings report in 3 bullet points: what beat expectations, what missed, and what guidance suggests for next quarter. Keep each bullet under 30 words. [REPORT TEXT]
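A zero-shot prompt like the one above can be assembled mechanically from its ingredients. Here is a minimal Python sketch; the function name and field layout are illustrative, not from any library, and tone can be passed as just another constraint.

```python
def zero_shot_prompt(role, task, output_format, constraints=()):
    """Assemble a zero-shot prompt from role, task, format, and constraints."""
    lines = [f"You are {role}.", task, f"Format: {output_format}."]
    lines += [f"Constraint: {c}." for c in constraints]
    return "\n".join(lines)


prompt = zero_shot_prompt(
    role="a senior financial analyst",
    task=("Summarise the following earnings report in 3 bullet points: "
          "what beat expectations, what missed, and what guidance suggests "
          "for next quarter."),
    output_format="bulleted list",
    constraints=["Keep each bullet under 30 words"],
)
print(prompt)
```

Building prompts this way keeps the five ingredients explicit, so a missing one (no format, no constraints) is visible at a glance.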
FEW-SHOT Examples in the prompt

Provide 2-3 examples of the format or style you want before asking for the actual output. Dramatically improves format adherence and style matching.

Classify these customer messages as: URGENT, NORMAL, or LOW-PRIORITY.

Message: "My account has been locked and I can't access anything" → URGENT
Message: "When does the new feature launch?" → NORMAL
Message: "Can I change my notification preferences?" → LOW-PRIORITY

Now classify: "I was double-charged this month" →
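The classification prompt above follows a fixed shape: instruction, labelled examples, then the real query. A small Python sketch (helper name and layout are illustrative) makes that shape reusable for any set of labelled examples:

```python
def few_shot_prompt(instruction, examples, query):
    """Build a few-shot prompt: instruction, labelled examples, then the query."""
    shots = "\n".join(f'Message: "{text}" → {label}' for text, label in examples)
    return f'{instruction}\n\n{shots}\n\nNow classify: "{query}" →'


examples = [
    ("My account has been locked and I can't access anything", "URGENT"),
    ("When does the new feature launch?", "NORMAL"),
    ("Can I change my notification preferences?", "LOW-PRIORITY"),
]
prompt = few_shot_prompt(
    "Classify these customer messages as: URGENT, NORMAL, or LOW-PRIORITY.",
    examples,
    "I was double-charged this month",
)
print(prompt)
```

Ending the prompt mid-pattern ("→" with nothing after it) nudges the model to complete the pattern with just a label rather than a full sentence.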
CHAIN-OF-THOUGHT Step-by-step reasoning

Ask the model to work through a problem step by step before giving the final answer. Dramatically improves performance on maths, logic, multi-step analysis, and complex decisions. Adding "Let's think step by step" or "Reason through this" before complex questions works remarkably well.

I need to decide whether to hire a contractor or full-time employee for this marketing role. Think through this step by step, considering: cost differences (salary vs contractor rate), flexibility needs, time to productivity, legal considerations, and company stage. Then give a final recommendation with reasoning. [Role details and budget]
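The hire-or-contract prompt above is a decision question plus factors to weigh plus a step-by-step trigger. That composition can be sketched in a few lines of Python (the wording of the trigger and the helper name are illustrative):

```python
STEP_TRIGGER = ("Think through this step by step, then give a final "
                "recommendation with reasoning.")


def chain_of_thought_prompt(question, factors=()):
    """Append explicit factors to weigh and a step-by-step trigger to a question."""
    factor_text = ("Consider: " + ", ".join(factors) + ". ") if factors else ""
    return f"{question} {factor_text}{STEP_TRIGGER}"


prompt = chain_of_thought_prompt(
    "Should I hire a contractor or a full-time employee for this marketing role?",
    factors=("cost differences", "flexibility needs", "time to productivity",
             "legal considerations", "company stage"),
)
print(prompt)
```

Listing the factors yourself, rather than leaving them implicit, is what turns a generic "think step by step" into a structured analysis.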
ROLE PROMPTING Persona assignment

Assign the model a specific role, expertise level, or persona. This shifts the vocabulary, tone, and framing of responses toward that role's typical communication style. Works well for professional advice, technical explanations, and creative work.

You are a senior UX designer reviewing a product specification. Your job is to identify usability problems a first-time user would encounter. Be specific and actionable. Flag the top 5 issues, ordered by severity. [SPEC]
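In chat-style interfaces, role prompting usually means putting the persona in the system slot and the task plus material in the user slot. The sketch below builds plain dicts in the {"role": ..., "content": ...} shape that most chat APIs use; it calls no API, and the helper name is illustrative:

```python
def role_messages(persona, task, material):
    """Build a chat-style message list: persona as system, task + input as user."""
    return [
        {"role": "system", "content": f"You are {persona}."},
        {"role": "user", "content": f"{task}\n\n{material}"},
    ]


messages = role_messages(
    "a senior UX designer reviewing a product specification",
    "Identify the top 5 usability problems a first-time user would encounter, "
    "ordered by severity. Be specific and actionable.",
    "[SPEC]",
)
```

Keeping the persona in the system message means it persists across follow-up turns instead of having to be restated in every prompt.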
STRUCTURED OUTPUT Format specification

Specify exactly what format you want the output in: JSON, markdown table, numbered list, specific headers, character limits. Modern LLMs follow format instructions reliably, which makes their output much easier to process downstream.

Analyse this job description and return a JSON object with: {"role_title": string, "seniority": "junior|mid|senior", "remote_friendly": boolean, "required_skills": [string], "nice_to_have_skills": [string], "salary_range_mentioned": boolean}. Only include fields where you have clear evidence from the text. [JOB DESCRIPTION]
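The payoff of structured output is that the reply can be checked by code. A minimal Python sketch of that downstream check, using the field names the prompt above requests (the helper name and sample reply are illustrative):

```python
import json

# Fields the job-description prompt asks for, with the types a
# well-formed reply should use.
EXPECTED = {
    "role_title": str,
    "seniority": str,
    "remote_friendly": bool,
    "required_skills": list,
    "nice_to_have_skills": list,
    "salary_range_mentioned": bool,
}


def check_reply(raw):
    """Parse the model's JSON reply; report missing and mistyped fields.

    Missing fields may be fine, since the prompt tells the model to omit
    fields without clear evidence; mistyped fields are always a problem.
    """
    data = json.loads(raw)
    missing = [k for k in EXPECTED if k not in data]
    mistyped = [k for k, t in EXPECTED.items()
                if k in data and not isinstance(data[k], t)]
    return data, missing, mistyped


raw = ('{"role_title": "Data Analyst", "seniority": "mid", '
       '"required_skills": ["SQL", "Python"], "salary_range_mentioned": false}')
data, missing, mistyped = check_reply(raw)
```

A check like this is what turns format instructions from a style preference into a contract: a malformed reply fails loudly instead of silently corrupting whatever consumes it.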

Most poor AI outputs come from one of these four prompting errors. They're easy to avoid once you know what to look for.

Being too vague

"Write a blog post" or "Summarise this" gives the model no target to aim for. It defaults to generic.

Fix: Add topic, audience, length, tone, key points to include, and what to avoid.

Not specifying format

The model will pick a format. It won't always pick the right one for your use case.

Fix: Explicitly request: "Respond as a numbered list", "Use markdown headers", "Plain text, no bullet points".

One-shot and done

Treating prompting as input → output. Prompting is iterative. First output is rarely optimal.

Fix: Follow up with "Make it more concise", "Adjust the tone to be less formal", "Add a section on X".

Ignoring context limits

Dumping enormous amounts of text without structure forces the model to decide what's relevant.

Fix: Front-load the most important context. Tell the model what to prioritise in the input you're providing.

The underlying principle

A good prompt is a precise brief. Think of it like briefing a highly capable consultant who knows nothing specific about your situation: they need context, the goal, constraints, and the format you want. Give them that, and the output is good. Don't, and you get generic work.

Sources
Wei et al. — Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, 2022. arxiv.org/abs/2201.11903
Brown et al. — Language Models are Few-Shot Learners (GPT-3 paper), 2020. arxiv.org/abs/2005.14165
Prompting is the skill gap that separates casual users from power users.

Two people using identical AI tools get vastly different results. The difference is almost never the model — it's the prompt. A vague instruction produces generic output. A specific, contextual, well-structured instruction produces something you can actually use.

The good news: prompting is a learnable skill. And unlike most skills, the feedback loop is immediate. Bad prompt, bad output — adjust. Good prompt, good output — save it, reuse it, build a library. The people getting the most value from AI tools aren't the ones with the most technical background. They're the ones who've spent time developing their prompting intuition.

Ready-to-use prompts

High-performance prompts for every task.

Browse our curated library of tested, high-performance prompts across writing, analysis, coding, research, and more.

Explore prompt library →

Veltrix Collective · Sources: Wei et al (2022), Brown et al (2020). Published April 2026.

Written by Luke Madden, founder of Veltrix Collective. Data synthesis and analysis by Vel.