AI Code Review
AI code review is most valuable as a first pass before human review — catching the obvious issues (missing null checks, SQL injection vectors, unhandled error paths) so that the human reviewer can focus on architecture, design, and intent. The key to useful AI review is structured, specific prompts that separate the code from the review criteria.
What AI Code Review Catches Well
AI review is reliable at catching:
- Security issues: SQL injection, XSS, missing input validation, exposed secrets in code
- Common bug patterns: off-by-one errors, null dereferences, missing error handling
- Code style and consistency: naming conventions, unused variables, dead code
- Documentation gaps: functions with no docstrings, complex logic with no comments
- Test coverage gaps: untested edge cases, missing assertions
- API contract violations: wrong status codes, missing required fields in responses
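To make the categories above concrete, here is a short Python sketch (hypothetical function and table names) containing two issues an AI reviewer reliably flags: a SQL injection vector and a missing null guard.

```python
import sqlite3

def get_user(conn, user_id):
    # SQL injection: user_id is interpolated directly into the query string,
    # so a value like "1 OR 1=1" changes the query's meaning
    cur = conn.execute(f"SELECT name FROM users WHERE id = {user_id}")
    row = cur.fetchone()
    # Missing null guard: fetchone() returns None when no row matches,
    # so row[0] raises TypeError for unknown ids
    return row[0]
```

Both problems are mechanical pattern matches, which is exactly why AI review catches them consistently; neither requires understanding what the surrounding feature is for.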
Where human review is still essential
- Architectural fit — does this belong in this service/module at all?
- Business logic correctness — is this what we actually want to do?
- Cross-cutting concerns — performance implications across the whole system
- Team conventions and implicit standards not in the code itself
- Whether the feature was scoped correctly in the first place
Prompt Patterns for Code Review
1. Security Review
Review the following code for security vulnerabilities.
Focus specifically on:
- SQL injection and query injection risks
- Missing input validation (user-controlled data reaching dangerous operations)
- Authentication and authorisation bypass opportunities
- Sensitive data exposed in logs, responses, or error messages
- Hardcoded secrets or credentials
For each finding: state the line/function, explain the risk, and suggest a specific fix.
If no vulnerabilities are found, say so explicitly — do not fabricate issues.
[PASTE CODE]
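As an illustration of the kind of fix this prompt should elicit for an injection finding, here is a sketch (hypothetical function and table names) of a parameterized database lookup, which is the standard remediation for string-interpolated SQL.

```python
import sqlite3

def get_user_safe(conn, user_id):
    # Parameterized query: the driver binds user_id as data, not SQL,
    # closing the injection vector
    cur = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,))
    row = cur.fetchone()
    if row is None:  # explicit guard instead of crashing on row[0]
        return None
    return row[0]
```

A good review comment names the line, explains the risk ("attacker-controlled `user_id` reaches the query text"), and proposes exactly this kind of specific change.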
2. Logic and Bug Review
Review the following code for logic errors and missing edge case handling.
Context: [Describe what this code is supposed to do]
Check for:
- Null or undefined values that are not guarded
- Off-by-one errors in loops or array access
- Missing error handling for external calls (DB, APIs, file system)
- Race conditions or concurrency issues
- Cases where the function could return an unexpected value or type
Do not suggest stylistic improvements — focus on correctness only.
[PASTE CODE]
3. Structured Review with XML Tags (Most Reliable)
<context>
This is a [language] function in a [framework] application. It [what it does].
</context>
<criteria>
Review for: security vulnerabilities, unhandled errors, missing validation.
Do NOT suggest: refactoring, style changes, performance optimisations.
</criteria>
<code>
[PASTE CODE HERE]
</code>
XML tags prevent the AI from confusing review instructions with the code being reviewed. This structure is especially important for longer code samples.
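If you run structured reviews often, the template can be assembled programmatically. The sketch below (a hypothetical helper, not a library API) builds the same XML-tagged prompt from its parts, so the code under review is always cleanly separated from the instructions.

```python
def build_review_prompt(language, framework, description,
                        criteria, exclusions, code):
    # Assemble the structured review prompt; the XML tags keep review
    # instructions separate from the code being reviewed
    return (
        "<context>\n"
        f"This is a {language} function in a {framework} application. "
        f"It {description}.\n"
        "</context>\n"
        "<criteria>\n"
        f"Review for: {criteria}.\n"
        f"Do NOT suggest: {exclusions}.\n"
        "</criteria>\n"
        "<code>\n"
        f"{code}\n"
        "</code>"
    )
```

Because the code is inserted last and inside its own tags, instructions embedded in the code itself (comments saying "ignore previous instructions", for example) are less likely to be treated as part of the review criteria.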
Integrating AI Review into Your PR Workflow
Recommended workflow:
- Before opening the PR: Run AI security and logic review on your own diff. Fix obvious issues before requesting human review.
- In the PR description: Note what the AI review found and what you fixed — this signals to reviewers that a first pass was done.
- Automated CI review (optional): Tools like GitHub Copilot Review and CodeRabbit can run automatically on every PR and post inline comments. Use these for the consistent checks (style, patterns, obvious bugs).
- Human review focus: Ask human reviewers to focus on architecture, business logic, and team conventions — the things AI misses.
- After merge: If AI review consistently misses a class of issue, add it explicitly to your review prompt template.
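The "before opening the PR" step reduces to capturing your branch's exact diff. The sketch below builds a throwaway repository only so the commands are self-contained; in your own branch you would run just the final `git diff` line. "main" is an assumed base-branch name, so substitute your repository's default branch.

```shell
set -e
# Throwaway repo so the example runs anywhere (skip this part in real use)
workdir=$(mktemp -d)
cd "$workdir"
git init -q -b main
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "base"
git checkout -q -b feature
printf 'print("hello")\n' > app.py
git add app.py
git -c user.email=ci@example.com -c user.name=ci commit -q -m "add app.py"

# The step that matters: capture exactly what the PR will contain,
# then paste review.diff into the security and logic review prompts
git diff main...HEAD > review.diff
```

Reviewing the diff rather than whole files keeps the AI focused on what actually changed, and mirrors what human reviewers will see in the PR.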
AI Code Review Tools
| Tool | How it works | Best for |
|---|---|---|
| GitHub Copilot Review | Automatic inline comments on PRs; integrates with GitHub workflow | Teams already on Copilot; automated first-pass comments |
| CodeRabbit | AI reviewer that posts PR comments; configurable rules; supports most Git hosts | Teams wanting automated review without full Copilot subscription |
| Claude / ChatGPT (manual) | Paste code and prompt directly; most flexible for specific review criteria | Deep security audits; architectural review; one-off complex diffs |
| Cursor Chat | Review the open file with codebase context available | In-editor review while coding; catches issues before committing |
Limits to Know
AI review is not a security gate
Do not treat AI review as a substitute for dedicated security scanning. AI can be persuaded that its findings are false positives, and its reviews are inconsistent across runs. Use AI review for speed and breadth; use deterministic security scanners (SAST tools) for non-negotiable security checks. These are complementary, not substitutes.
Checklist: Do You Understand This?
- What six categories of issue is AI code review most reliable at finding?
- Why do XML tags improve the reliability of AI code review prompts?
- Write a security review prompt for a user registration endpoint that creates a new database record.
- What should human reviewers focus on that AI cannot reliably review?
- Why should AI review not replace dedicated SAST security scanners?
- What is one concrete way to use AI review results in your PR description?