🧠 All Things AI
Beginner

Prompt Anatomy

Every effective prompt is built from the same six components. Understanding these building blocks — and how to combine them — is the single most practical skill you can develop for working with AI. This page breaks down each component, shows you the recommended order, and teaches you how to avoid the mistakes that trip up beginners.

The Six Components

Think of a prompt as a specification document. The more clearly you specify what you want, the better the result. Every prompt — from a one-line question to a complex multi-paragraph instruction — is built from some combination of these six parts:

Prompt: a complete AI prompt specification

  • role (string): who the AI should be — persona and expertise domain
  • context (string): background, documents, and business rules the AI needs to know
  • task (string, required): what the AI should DO — the action verb and goal
  • constraints (string[]): boundaries — length, scope, tone, safety, edge cases
  • output_format (string): how the answer should be structured — JSON, bullets, table, prose
  • examples (Example[]): 1-3 input/output pairs showing what "good" looks like (few-shot)

You do not need all six in every prompt. A casual question like "What is photosynthesis?" only uses the Task component. But as your tasks get more complex, adding more components dramatically improves the quality and consistency of the response.
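The specification above can be sketched as a simple data structure. This is an illustrative sketch only — the class and field names are hypothetical, not any vendor's API:

```python
from dataclasses import dataclass, field

@dataclass
class Example:
    """One input/output pair for few-shot prompting."""
    input: str
    output: str

@dataclass
class Prompt:
    """A complete AI prompt specification. Only `task` is required."""
    task: str                                              # what the AI should DO
    role: str = ""                                         # persona and expertise domain
    context: str = ""                                      # background the AI needs
    constraints: list[str] = field(default_factory=list)   # boundaries and rules
    output_format: str = ""                                # structure of the answer
    examples: list[Example] = field(default_factory=list)  # few-shot pairs

# A casual question uses only the required component:
p = Prompt(task="Explain photosynthesis in one paragraph.")
```

As the text notes, most prompts leave several fields empty — you add components as the task grows more complex.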

1. Role / Persona

The role tells the AI who it should be. It anchors the model in a specific expertise domain, vocabulary level, and perspective. Without a role, the AI defaults to a generic "helpful assistant" — which is fine for simple questions but produces unfocused results for specialized tasks.

Example roles:

  • "You are a senior data engineer with 10 years of experience in ETL pipelines."
  • "You are a patient, encouraging tutor teaching a 10-year-old about fractions."
  • "You are a technical writer creating documentation for a REST API."
  • "You are a financial analyst evaluating startup pitch decks."

Why it works: The role narrows the model's decision space. A "senior data engineer" will use different terminology, assumptions, and trade-off analysis than a "product manager" — even when answering the exact same question.

  • Be specific about expertise level — "senior" vs. "junior" produces noticeably different output
  • Include years of experience or domain specialization to further ground the response
  • Match the role to your actual need — don't use "expert physicist" when you need a simple explanation

2. Task / Instruction

The task is the verb of your prompt — what you want the AI to actually do. This is the most important component. A prompt without a clear task is like asking someone to help you without telling them what you need help with.

Vague task:

"Tell me about this article."

Specific task:

"Summarize the following article in 3 bullet points, focusing on the key findings and their implications for healthcare providers."

  • Start with an action verb: "Summarize," "Analyze," "Compare," "Generate," "Rewrite," "Explain"
  • Specify the audience: "for a technical reader" vs. "for a 5th grader"
  • Define success criteria: what does a good response look like?
  • If the task is complex, break it into numbered steps

3. Context

Context is the background information the AI needs to do its job well. This is where most quality gains come from. An AI with relevant context produces dramatically better results than one operating on its training data alone.

Types of context you can provide:

  • Documents — articles, reports, emails, code files for the AI to work with
  • Background — "We are a B2B SaaS company selling to healthcare providers"
  • User info — "The reader is a non-technical executive"
  • Prior decisions — "We already decided to use PostgreSQL for the database"
  • Motivation — "This summary will be used in a board presentation"

  • Do not assume the AI "knows" your business rules, abbreviations, or internal jargon — spell them out
  • For long documents, place the context before the instruction
  • Tell the AI why you need something — it changes the output significantly

4. Constraints

Constraints set boundaries on what the AI should and should not do.

Subjective constraint (unreliable):

"Keep it short."

Objective constraint (reliable):

"Limit your response to 3 sentences, each under 20 words."

Critical best practice: Tell the model what to do, not just what not to do. Instead of "Do not use bullet points," say "Write in flowing prose paragraphs." Positive instructions are more reliable because they give the model a clear target.
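A side benefit of objective constraints is that they are machine-checkable. A minimal sketch (function name and sentence-splitting heuristic are illustrative) that verifies a response against "at most 3 sentences, each under 20 words":

```python
import re

def meets_constraints(text: str, max_sentences: int = 3, max_words: int = 20) -> bool:
    """Check a response against measurable length constraints."""
    # Split on sentence-ending punctuation; drop empty fragments.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    if len(sentences) > max_sentences:
        return False
    return all(len(s.split()) < max_words for s in sentences)

print(meets_constraints("Short answer. Two sentences only."))      # True
print(meets_constraints("One. Two. Three. Four sentences here."))  # False
```

There is no equivalent check for "keep it short" — which is exactly why subjective constraints are unreliable.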

5. Examples (Few-Shot)

Examples are the most powerful calibration tool you have. They show the model exactly what you want — the tone, format, depth, and style — in a way that abstract instructions alone cannot match. This is called few-shot prompting.

Example of few-shot prompting:

Task: Convert meeting notes into action items.
Example input:
"Discussed Q3 targets. Sarah will prepare the budget by Friday."
Example output:
- [ ] Sarah: Prepare Q3 budget (due: Friday)
Now process this:
"Reviewed product roadmap. Mike to finalize API docs next week."
  • 1-3 examples is the sweet spot — enough to establish the pattern, not so many that you waste tokens
  • Only show desired behavior — do not include anti-patterns
  • Make your examples diverse enough to cover edge cases but consistent in format
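The pattern above can be assembled programmatically. A sketch (names hypothetical) that builds a few-shot prompt from a task, example pairs, and a new input:

```python
def build_few_shot(task: str, examples: list[tuple[str, str]], new_input: str) -> str:
    """Assemble a few-shot prompt: task, 1-3 example pairs, then the new input."""
    parts = [f"Task: {task}"]
    for inp, out in examples:
        parts.append(f'Example input:\n"{inp}"')
        parts.append(f"Example output:\n{out}")
    parts.append(f'Now process this:\n"{new_input}"')
    return "\n".join(parts)

prompt = build_few_shot(
    "Convert meeting notes into action items.",
    [("Discussed Q3 targets. Sarah will prepare the budget by Friday.",
      "- [ ] Sarah: Prepare Q3 budget (due: Friday)")],
    "Reviewed product roadmap. Mike to finalize API docs next week.",
)
print(prompt)
```

Keeping the example pairs in one place also makes it easy to stay within the 1-3 example sweet spot and keep their format consistent.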

6. Output Format

The output format tells the AI how to structure its response. This is especially important when the output will be parsed by another system.

Common output formats:

  • Bullet points — "Respond with a bulleted list of 5-7 items"
  • Numbered steps — "Provide step-by-step instructions, numbered 1 through N"
  • JSON — "Return a JSON object with keys: title, summary, tags"
  • Markdown table — "Present the comparison as a markdown table with columns: Feature, Tool A, Tool B"
  • Prose paragraphs — "Write 2-3 paragraphs of flowing prose, no bullet points"
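When the output feeds another system, it pays to validate what comes back. A minimal sketch, assuming a JSON format request with keys title, summary, and tags (the reply string below stands in for a real model response):

```python
import json

def parse_model_json(reply: str) -> dict:
    """Parse a model reply expected to be a JSON object with required keys."""
    cleaned = reply.strip()
    if cleaned.startswith("```"):
        # Models sometimes wrap JSON in a fenced code block despite instructions.
        cleaned = cleaned.strip("`")
        cleaned = cleaned.removeprefix("json").strip()
    data = json.loads(cleaned)
    for key in ("title", "summary", "tags"):
        if key not in data:
            raise ValueError(f"missing key: {key}")
    return data

reply = '{"title": "Q3 Report", "summary": "Revenue grew 12%.", "tags": ["finance"]}'
print(parse_model_json(reply)["title"])  # Q3 Report
```

If parsing fails, the usual fix is to tighten the format instruction ("Return ONLY a JSON object, no prose before or after") rather than to patch the parser.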

Recommended Component Order

The order in which you arrange these components matters. Here is the consensus structure recommended by OpenAI, Anthropic, and Google for their respective models:

  1. Role: set the persona first
  2. Context: background and documents
  3. Task: the specific instruction
  4. Constraints: boundaries and rules
  5. Format: output structure
  6. Examples: show desired input/output

For long documents: place content BEFORE the task. Repeat critical instructions at the end — models attend most to beginning and end.
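The recommended ordering can be made mechanical. A sketch (names hypothetical) that joins whichever components are present in the Role, Context, Task, Constraints, Format, Examples sequence, optionally repeating the task at the end for long contexts:

```python
def assemble_prompt(role="", context="", task="", constraints=(),
                    output_format="", examples="", repeat_task=False):
    """Join non-empty components in the recommended order."""
    sections = [
        role,
        context,
        task,
        "\n".join(f"- {c}" for c in constraints),
        output_format,
        examples,
    ]
    if repeat_task and context:
        # Models attend most to the beginning and end of long prompts.
        sections.append(f"Reminder: {task}")
    return "\n\n".join(s for s in sections if s)

print(assemble_prompt(role="You are a technical writer.",
                      task="Summarize the release notes in 3 bullet points.",
                      constraints=["Under 100 words", "Plain English"]))
```

Because empty components are simply skipped, the same function covers everything from a bare one-line task to a full six-component specification.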

A Full Example

Here is a complete prompt using all six components:

Role: You are a senior product manager with experience in B2B SaaS companies.
Context: We are building an internal tool for our sales team to track customer onboarding. The team uses a shared spreadsheet that is error-prone. We have 50 sales reps and onboard ~200 customers per month.
Task: Write a product requirements document (PRD) for the first version of this onboarding tracker.
Constraints: Focus on the MVP — maximum 8 features. No mobile app features. Must integrate with Salesforce. Assume a 2-month timeline with 3 engineers.
Format: Structure: Problem Statement, User Personas, Feature List (table: Feature | Priority | Description | Acceptance Criteria), Technical Requirements, Success Metrics.
Example feature row: | Onboarding checklist | P0 | Step-by-step checklist per customer | All steps visible on one screen; rep can mark complete with one click |

System Prompts vs. User Prompts

System Prompt (Developer Message)

Set by the application developer. Persists across the entire conversation. Contains the role, behavioral rules, safety constraints, and output formatting defaults. Think of it as the AI's "job description."

User Prompt

Changes with each interaction. Contains the actual query, documents for analysis, and task-specific context.

  • System prompt: Role, persistent rules, safety constraints, tool definitions, output format defaults
  • User prompt: Specific task, documents, context for this query, examples relevant to this request
  • Claude note: Important instructions should appear in user messages too — Claude weights user messages highly
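In chat-style APIs, this split typically appears as a list of role-tagged messages. A generic sketch of the shape — the field names follow the common convention, but check your provider's documentation for the exact schema:

```python
messages = [
    {
        # System prompt: persists across the conversation — the AI's "job description".
        "role": "system",
        "content": "You are a senior product manager. Always answer in markdown.",
    },
    {
        # User prompt: changes with each interaction — the actual query and its context.
        "role": "user",
        "content": "Context: we sell B2B SaaS to healthcare providers.\n\n"
                   "Task: draft a one-line value proposition.",
    },
]

# The model's reply would come back as a {"role": "assistant", ...} message.
print([m["role"] for m in messages])
```

Per the Claude note above, important instructions are often duplicated into the user message rather than left only in the system message.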

Using Delimiters to Structure Prompts

When your prompt has multiple sections, use delimiters to clearly separate them. This prevents the model from confusing your instructions with the content you want it to process.

Popular delimiter styles:

  • XML tags (preferred for Claude) — <context>...</context>, <task>...</task>
  • Markdown headings (universal) — ## Instructions, ## Context
  • Triple backticks (for code/data) — wrap the content between ``` fences
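Wrapping sections in delimiters is easy to automate. A small sketch using XML-style tags (the helper and tag names are illustrative):

```python
def tag(name: str, body: str) -> str:
    """Wrap a prompt section in XML-style delimiter tags."""
    return f"<{name}>\n{body}\n</{name}>"

prompt = "\n\n".join([
    tag("context", "Full article text goes here..."),
    tag("task", "Summarize the article above in 3 bullet points."),
])
print(prompt)
```

The tags make it unambiguous where the document ends and your instruction begins, so the model cannot mistake a sentence inside the article for a command.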

Common Mistakes Beginners Make

1. Being too vague

"Write something about AI" gives the model no constraints. Specify the audience, format, length, tone, and purpose. Vagueness is the #1 cause of disappointing results.

2. Overloading a single prompt

Cramming five different tasks into one prompt. Break them into separate prompts or clearly numbered steps.

3. Assuming the AI knows your context

The AI does not know your company's acronyms, internal tools, or business rules unless you tell it. Always spell out what you would explain to a smart new hire.

4. Only saying what NOT to do

Always pair negative constraints with positive ones: "Write in plain English for a non-technical reader. Limit to 200 words. Explain concepts with analogies instead of code."

5. Not iterating

Treating the first prompt as final. Prompt engineering is iterative — test the result, identify what is wrong, adjust, and try again.

6. Using subjective constraints

"Keep it brief" is subjective. Use measurable constraints: "3 sentences," "under 150 words," "exactly 5 bullet points."

7. Showing anti-patterns in examples

If you include a "bad example," the model may still pick up on that pattern. Only demonstrate the behavior you do want.

How Different Models Handle Prompts

  • Instruction following: GPT-4o is very literal (be explicit); Claude is precise and follows instructions closely; Gemini works best with direct, concise prompts
  • Best delimiter: XML or Markdown for GPT-4o; XML tags for Claude (it was trained on them); XML or Markdown headings for Gemini
  • System prompt weight: high priority for GPT-4o; Claude weights user messages more; Gemini weights user and system prompts similarly
  • Key tip: for GPT-4o, repeat key instructions at the end of long prompts; for Claude, put critical rules in user messages too; for Gemini, be direct and avoid over-explanation

Checklist: Do You Understand This?

  • Can you name the six components of a prompt?
  • Can you explain why examples (few-shot) are often more powerful than instructions alone?
  • Can you rewrite a vague prompt to make it specific using all six components?
  • Do you know the difference between a system prompt and a user prompt?
  • Can you explain why "do this" is more reliable than "don't do that"?
  • Can you describe how Claude, GPT, and Gemini differ in prompt handling?