FAQ vs Task vs Copilot Bots
Every chatbot built with an LLM falls into one of three fundamental patterns, each with a distinct architecture, a different relationship between user and system, and a different set of failure modes. Getting this choice wrong early is expensive: it shapes your data model, your memory strategy, your evaluation criteria, and your infrastructure.
The Three Patterns
Autonomy is a spectrum: choose the simplest pattern that solves your problem.
| Dimension | FAQ Bot | Task Bot | Copilot Bot |
|---|---|---|---|
| Core purpose | Answer questions from a known knowledge base | Complete a specific action or workflow end-to-end | Augment a human working inside a tool or workflow |
| User interaction style | Question → Answer (often a single turn) | Goal → Multi-step dialogue → Action completed | Suggestion → Human decides → Accept / modify / reject |
| LLM's role | Retrieve + rephrase grounded content | Understand intent, collect parameters, call APIs | Generate, summarise, suggest; human in the loop |
| State required | Minimal; usually stateless | High; tracks collected slots, current step, completion | Context of open document / data / user session |
| Autonomy | None: read-only, no side effects | Medium: executes with user confirmation | Low: suggests, never commits without human approval |
| Failure mode | Hallucination / out-of-scope answers | Incomplete slot collection, wrong API call, partial action | User accepts bad suggestion blindly |
| Common examples | Support docs bot, policy Q&A, HR handbook | Restaurant booking, refund bot, onboarding wizard | GitHub Copilot, Notion AI, Excel Copilot, code review bot |
Pattern 1: FAQ Bot (Knowledge Q&A)
The FAQ bot answers questions from a defined knowledge base. Users ask things; the bot retrieves and synthesises grounded answers. It has no side effects and takes no actions. The canonical implementation is RAG (Retrieval-Augmented Generation).
Architecture
FAQ bot = RAG: retrieve grounded content, synthesise, cite. Never answer from training knowledge alone.
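A minimal sketch of this architecture, with a naive keyword-overlap retriever standing in for a real vector store and the LLM synthesis step stubbed out. The `Doc`, `retrieve`, and `answer` names, and the sample knowledge base, are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    source: str
    text: str

# Tiny stand-in knowledge base; a real system would use embeddings + a vector store.
KB = [
    Doc("refund-policy.md", "Refunds are issued within 14 days of purchase."),
    Doc("shipping.md", "Standard shipping takes 3-5 business days."),
]

def retrieve(question: str, kb: list[Doc], k: int = 1) -> list[Doc]:
    """Naive keyword-overlap ranking (stand-in for embedding search)."""
    q_words = set(question.lower().split())
    def score(doc: Doc) -> int:
        return len(q_words & set(doc.text.lower().split()))
    ranked = sorted(kb, key=score, reverse=True)
    return [d for d in ranked[:k] if score(d) > 0]

def answer(question: str, kb: list[Doc]) -> str:
    """Grounded answer: cite sources, refuse when nothing is retrieved."""
    docs = retrieve(question, kb)
    if not docs:
        return "I don't have that in my knowledge base."  # out-of-scope guard
    context = " ".join(d.text for d in docs)
    citations = ", ".join(d.source for d in docs)
    # In a real bot, `context` goes to the LLM with a strict
    # "answer only from the provided context" system prompt.
    return f"{context} (source: {citations})"
```

The out-of-scope guard is the part most often skipped: when retrieval returns nothing, the bot must refuse rather than fall back to the model's training knowledge.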
When to choose FAQ Bot
Anti-patterns
Pattern 2: Task Bot (Goal Completion)
The task bot helps a user accomplish a specific goal that involves multiple steps and ultimately calls an API or takes an action. It collects required information through dialogue (slot-filling), validates inputs, and executes: booking a restaurant, processing a refund, or onboarding a new user.
Architecture
Never skip the confirmation gate before any irreversible action (payments, cancellations, sends)
When to choose Task Bot
Anti-patterns
Pattern 3: Copilot Bot (Human Augmentation)
The copilot bot is embedded inside an existing tool and augments the human using it. Unlike the other two patterns, the copilot does not own a conversation; it assists within a context the human is already working in. It reads the current state (open document, code file, data row), generates suggestions, and the human decides whether to accept, modify, or ignore them.
Architecture
When to choose Copilot Bot
Anti-patterns
Hybrid Patterns
| Combination | Example | How they compose |
|---|---|---|
| FAQ + Task | Customer support bot (answers questions + can issue refunds) | Intent classifier routes: informational queries → FAQ pattern, action requests → Task pattern |
| Copilot + FAQ | Code assistant with embedded documentation lookup | Copilot generates suggestions; can pull from docs RAG as part of context |
| Task + Copilot | Sales workflow bot (copilot drafts email, task bot logs CRM entry) | Copilot handles content creation, Task pattern handles the write-back to external systems |
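The FAQ + Task row hinges on the intent router. A minimal sketch using keyword matching; a production system would use an LLM or a trained classifier here, and the `ACTION_VERBS` list is an illustrative assumption:

```python
# Hypothetical action vocabulary; real routers classify intent with a model.
ACTION_VERBS = {"refund", "cancel", "book", "change", "update"}

def route(message: str) -> str:
    """Route a user message to the 'task' or 'faq' sub-pattern."""
    words = set(message.lower().replace("?", "").split())
    return "task" if words & ACTION_VERBS else "faq"
```

Even this crude router illustrates the design point: routing happens once, up front, and each sub-pattern then runs with its own state, prompts, and evaluation criteria.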
Choosing Your Pattern
Common Architecture Mistakes
Building an agent when you need a task bot
Agentic architectures are complex, expensive to evaluate, and harder to make reliable. If your use case has a bounded set of tasks with known APIs, a well-designed task bot is more predictable, cheaper to run, and easier to test.
Building a FAQ bot without grounding
A general-purpose LLM answering from its training knowledge is not a FAQ bot; it is an ungrounded chatbot. Without a retrieval layer and strict system prompt grounding, the bot will confidently answer from outdated or incorrect training data.
Designing a copilot that auto-commits
The moment a copilot takes an action without explicit human approval, it has crossed into agent territory. Keep copilots suggestion-only until you have built the trust, oversight mechanisms, and rollback capabilities required for autonomous action.
Mixing patterns in a single conversation thread
A bot that answers questions, takes actions, and provides inline suggestions simultaneously is difficult to evaluate and maintain. Start with one pattern; add a second only when the first is stable and you have explicit routing logic.
How Evaluation Differs by Type
| Bot type | Primary metric | Key test cases |
|---|---|---|
| FAQ Bot | Answer faithfulness to sources; out-of-scope detection rate | Questions whose answers are in docs; questions whose answers are not; ambiguous questions |
| Task Bot | Task completion rate; slot accuracy; wrong-action rate | Happy path; mid-flow corrections; invalid inputs; ambiguous intents; abort requests |
| Copilot Bot | Suggestion accept rate; error rate in accepted suggestions | Varied contexts; expert users; novice users; edge-case inputs; adversarial content |
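To make the FAQ row concrete, one way to sketch a harness for its two metrics (names and signatures here are illustrative, not from any particular eval library):

```python
def eval_faq(bot, is_refusal, cases):
    """Score a FAQ bot on the two axes from the table above.

    bot:        callable question -> answer string
    is_refusal: callable answer -> bool (did the bot decline?)
    cases:      list of (question, answer_is_in_kb) pairs
    Returns (in_scope_answer_rate, out_of_scope_detection_rate).
    """
    in_scope = [q for q, in_kb in cases if in_kb]
    out_scope = [q for q, in_kb in cases if not in_kb]
    answered = sum(1 for q in in_scope if not is_refusal(bot(q)))
    refused = sum(1 for q in out_scope if is_refusal(bot(q)))
    return (answered / len(in_scope) if in_scope else 1.0,
            refused / len(out_scope) if out_scope else 1.0)
```

Note that answer faithfulness itself (does the answer match the source text?) needs a separate judge, human or model-based; this harness only measures the scope behaviour.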
Checklist: Do You Understand This?
- Can you describe the core purpose and LLM role for each of the three patterns?
- Can you explain when to choose a FAQ bot over a task bot, and vice versa?
- What is the defining architectural constraint that separates a copilot from an agent?
- Can you describe the slot-collection mechanism in a task bot and why a confirmation gate matters?
- What is the primary metric for evaluating each bot type?
- Can you name three anti-patterns, one per bot type?
- If someone asks for a "support bot that can answer product questions and process refunds", which combination of patterns do they need and how would you route between them?