# AI Pair Programming
AI pair programming is not about replacing a human partner — it is about having an always-available, tireless collaborator that is excellent at certain tasks and unreliable at others. The engineers who get the most from AI coding tools have a clear mental model of what each tool does well, how to direct it effectively, and when to stop listening to it. This page gives you that model.
## Understanding the Tool Roles
The three dominant AI coding tools serve different roles. They are not interchangeable — they layer.
| Tool | Role | Best for | Weakness |
|---|---|---|---|
| GitHub Copilot | Inline autocomplete + tab completions | Fast single-line and small block completions while typing; boilerplate | Myopic — only sees the current file; no multi-file reasoning |
| Cursor | In-editor AI with codebase context | In-file edits with surrounding context; chat about the open file; quick targeted changes | Agent mode is improving but less autonomous than Claude Code |
| Claude Code | Autonomous multi-file agent | Multi-file implementations; large refactors; understanding the whole codebase; complex problem-solving | Slower than inline tools; overkill for simple completions |
### The layering model
Use Copilot for fast suggestions while typing. Use Cursor for in-editor conversation about the code in front of you. Use Claude Code for tasks that span multiple files or require understanding the whole codebase. These are additive, not competing choices.
## Directing Your AI Partner Effectively
The biggest productivity gap between engineers who use AI well and those who use it poorly is how they communicate the task. Vague requests get vague code.
### Prompts that work
- Give context first: "This is a Next.js 16 app using TypeScript and Prisma. The database schema is: [paste schema]."
- Specify constraints: "Do not add new dependencies. Use the existing `fetchUser` function defined in `lib/api.ts`."
- Describe the goal, not just the action: "The endpoint should return 404 if the user is not found, not throw an unhandled error."
- Reference exact file/function names: keeps the AI grounded in your actual codebase, not an imagined one
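To make the constraint and goal bullets concrete, here is a minimal sketch of the kind of output a well-constrained prompt produces. The `fetchUser` helper and the response shape are invented for illustration; in a real app `fetchUser` would live in `lib/api.ts` and query the database.

```typescript
type User = { id: string; name: string };

// Stub standing in for the existing lib/api.ts helper the prompt referenced.
async function fetchUser(id: string): Promise<User | null> {
  const users: Record<string, User> = { "1": { id: "1", name: "Ada" } };
  return users[id] ?? null;
}

// The goal stated in the prompt: return 404 instead of throwing.
async function getUserResponse(
  id: string
): Promise<{ status: number; body: User | { error: string } }> {
  const user = await fetchUser(id);
  if (user === null) {
    return { status: 404, body: { error: "User not found" } };
  }
  return { status: 200, body: user };
}
```

Because the prompt named the existing helper and the exact failure behaviour, the AI has nowhere to invent a new dependency or an unhandled throw.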
### Prompts that fail
- "Write a login function" — no framework, no auth method, no error handling specified
- "Fix the bug" — no error message, no context, no expected vs actual behaviour
- "Make it better" — AI will add comments, rename variables, and change things you didn't ask for
- Continuing to refine bad AI output without resetting — if the approach is wrong, restart with better context
## The AI Pair Programming Workflow
1. Orient the AI at session start. For multi-file tasks in Claude Code: open with a summary of what the codebase does, the stack, and the specific task. For Cursor: open the relevant files before asking.
2. Break the task into steps. "First, write the database query function. Don't write the API route yet." Smaller scopes produce better outputs.
3. Review before accepting. Read every diff. Copilot tab-completions are fast but can introduce subtle errors. Treat AI code like code from a junior engineer — helpful but unverified.
4. Test immediately. Don't batch up AI-generated code without testing. Run your test suite or manually exercise the path after each meaningful chunk.
5. Give specific feedback. "The query is correct but it's not handling the case where user is null — add a guard" is better than "fix the error handling."
6. Know when to take back control. If the AI is going in circles after two correction attempts, step away, solve the core logic yourself, then ask AI to fill in the boilerplate around your solution.
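The "give specific feedback" step can be made concrete with a hypothetical before/after. The `MaybeUser` type and the query shape are invented for illustration; in a real app the value would come from a Prisma query that can return `null`.

```typescript
// Hypothetical query result; a real lookup can return null.
type MaybeUser = { id: string; email: string } | null;

// AI's first draft: crashes when the user is missing.
// function emailDomain(user) { return user.email.split("@")[1]; }

// After the specific review feedback "add a guard for null user":
function emailDomain(user: MaybeUser): string | undefined {
  if (user === null) {
    return undefined; // the requested guard
  }
  return user.email.split("@")[1];
}
```

Naming the exact missing case ("user is null") and the exact fix ("add a guard") lets the AI make a one-line change instead of rewriting the whole function.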
## What AI Pair Programming Does Best
### Where AI dramatically speeds you up
- Boilerplate: CRUD endpoints, form handlers, data transformations
- Test generation: writing unit tests for functions you've already written
- Regex, date formatting, string manipulation — one-liner tasks
- Translating between languages or frameworks you know less well
- Explaining unfamiliar code: "What does this function do?"
- Writing first-draft documentation and JSDoc comments
### Where you must stay in charge
- Architecture decisions — AI will generate something that works, not necessarily something that's right for your system
- Security-sensitive code — always review auth, input validation, and secret handling
- Performance-critical paths — AI optimises for correctness, not performance
- Business logic with non-obvious rules — AI does not know your domain the way you do
- Database migrations — an AI-generated migration that drops a column can be catastrophic
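The security bullet is worth a concrete sketch of what review should catch. A common example: AI drafts a plain string comparison for a secret, which leaks timing information; review should insist on a constant-time comparison. The function names here are hypothetical.

```typescript
import { timingSafeEqual } from "node:crypto";

// What AI often drafts: a direct comparison that leaks timing information.
function naiveCheck(token: string, expected: string): boolean {
  return token === expected;
}

// What review should require: a constant-time comparison.
function safeCheck(token: string, expected: string): boolean {
  const a = Buffer.from(token);
  const b = Buffer.from(expected);
  // timingSafeEqual throws on length mismatch, so guard first.
  if (a.length !== b.length) return false;
  return timingSafeEqual(a, b);
}
```

Both functions return the same booleans — which is exactly why this class of bug survives testing and must be caught by a human reading the diff.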
## Organisational Factors That Matter
The DORA 2025 research found that individual tool use matters less than organisational practices. Teams that get the most from AI coding tools share these traits:
- Clear stance on which AI tools are approved — engineers aren't making their own security decisions
- Strong version control discipline — AI-generated code is reviewed in PRs like any other code
- Small, frequent commits — easy to isolate where an AI error was introduced
- Good internal documentation — AI tools are only as good as the context they can read
- A culture of reviewing AI output, not rubber-stamping it
## Checklist: Do You Understand This?
- What is the layering model for Copilot, Cursor, and Claude Code — when do you use each?
- What four pieces of context should you give an AI before asking it to write multi-file code?
- Why is "make it better" a bad prompt — and what should you say instead?
- After how many failed correction attempts should you stop directing AI and solve the core logic yourself?
- Name three categories of work where AI pair programming speeds you up most, and two where you should stay in charge.
- According to the DORA 2025 research, what organisational factor matters more than which individual AI tool is used?