🧠 All Things AI
Beginner

AI-Assisted Debugging

The most powerful debugging technique with AI is not asking it to fix the bug — it is using it as a reasoning partner to help you understand what is happening. Forcing yourself to explain a bug clearly to an AI (the "AI rubber duck" method) often reveals the problem before the AI even responds. When that fails, structured prompts give the AI the evidence it needs to diagnose the issues it genuinely can help with.

The AI Rubber Duck Method

Traditional rubber duck debugging works by explaining your code to an inanimate object — the act of articulation forces clarity and often reveals the bug. AI is a better duck because it can ask follow-up questions and suggest hypotheses.

The rubber duck prompt:

I am debugging a problem and need to think it through. Please listen and ask questions.

What I expect to happen: [DESCRIBE EXPECTED BEHAVIOUR]

What actually happens: [DESCRIBE ACTUAL BEHAVIOUR]

When it happens: [CONDITIONS — always / intermittently / only in production / only on specific inputs]

What I have already tried: [LIST YOUR ATTEMPTS]

My current hypothesis: [YOUR BEST GUESS AT THE CAUSE]

Do not suggest a fix yet. Ask me a question that might help narrow down the cause.

The "do not suggest a fix yet" instruction keeps the AI in diagnostic mode rather than jumping to solutions that may miss the root cause.

Structured Debug Prompts

1. Stack Trace Analysis

Explain this error and stack trace. Tell me:

1. What the error means in plain language

2. Which line in my code is most likely causing it (from the stack trace)

3. The 2-3 most common causes of this specific error

4. What information I would need to diagnose it further

Stack trace:

[PASTE FULL STACK TRACE]

Relevant code (the function named in the stack trace):

[PASTE CODE]
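What counts as the "full stack trace" matters: the final line names the error, but the frames above it are what let the AI locate the failing line in your code. A minimal hypothetical Python example (the function and order data are invented for illustration):

```python
import traceback

def get_discount(order):
    # Assumes every order has a coupon attached -- the (hypothetical) bug.
    return order["coupon"]["percent_off"]

order = {"items": ["book"], "total": 20.0}  # no "coupon" key

try:
    get_discount(order)
except KeyError:
    # Paste everything this prints, not just the final "KeyError: 'coupon'"
    # line: the frame list above it is what pinpoints the failing line.
    traceback.print_exc()
```

Trimming the trace to the one-line error message removes exactly the information the prompt's point 2 depends on.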

2. Unexpected Output Analysis

This function returns the wrong value for the input below. Help me find why.

Function: [PASTE FUNCTION]

Input: [PASTE INPUT VALUE]

Expected output: [WHAT YOU EXPECTED]

Actual output: [WHAT YOU GOT]

Walk through the function step-by-step with this specific input and identify where the value diverges from what I expect.

Asking the AI to step through the execution trace forces it to reason carefully rather than pattern-match to a common fix that may not apply.
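As a concrete illustration, here is a hypothetical buggy function with the prompt's fields filled in. Stepping through it with the specific input exposes the divergence at the loop's very first iteration:

```python
def average(values):
    """Intended: arithmetic mean of values. Contains a deliberate bug."""
    total = 0
    for i in range(1, len(values)):  # bug: starts at 1, silently skips values[0]
        total += values[i]
    return total / len(values)

# Input:           [10, 20, 30]
# Expected output: 20.0
# Actual output:   16.666...  ((20 + 30) / 3 -- the 10 was never added)
print(average([10, 20, 30]))
```

Walked through step by step, the trace shows `i` starting at 1 on the first iteration, which is exactly the divergence point the prompt asks the AI to find.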

3. Intermittent Bug Analysis

I have a bug that occurs intermittently. It does not reproduce every time.

Frequency: [e.g., about 1 in 20 requests / only under load / happens after the system has been running for a while]

Symptoms: [what you observe when it occurs]

Environment: [production / staging / only on certain servers]

List the classes of bugs that cause intermittent failures in [language/framework] code. For each class, describe how I would confirm or rule it out as the cause.

Intermittent bugs often have systematic causes (race conditions, resource exhaustion, external dependency timeouts). This prompt surfaces a checklist to work through.
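One of those classes can be sketched concretely. The counter below is a hypothetical example of a race condition (an unsynchronised read-modify-write on shared state), together with the confirm-or-rule-out move: re-run with a lock.

```python
import threading

def increment_many(counter, lock=None, n=50_000):
    for _ in range(n):
        if lock:
            with lock:
                counter["value"] += 1
        else:
            counter["value"] += 1  # read-modify-write: not atomic across threads

def run(lock=None, threads=4, n=50_000):
    counter = {"value": 0}
    workers = [threading.Thread(target=increment_many, args=(counter, lock, n))
               for _ in range(threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return counter["value"], threads * n

# Confirming: run(lock=None) may intermittently return a count below the
# expected total. Ruling out: run(lock=threading.Lock()) always matches --
# if the bug persists with the lock, a race on this counter is not the cause.
```

The same pattern applies to the other classes the AI lists: for each one, find the cheap experiment that would make the bug deterministic or prove it absent.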

When AI Debugging Helps vs When It Misleads

AI debugging is reliable when

  • The error is a well-known pattern with a readable stack trace
  • You can provide the exact failing input and actual vs expected output
  • The bug is within a single function or small code block
  • You're debugging an error message you've never seen before and need it explained
  • You want a list of hypotheses to test, not a definitive answer

AI debugging misleads when

  • The bug is in external state (database values, environment variables, external API responses) that AI cannot see
  • You describe the bug vaguely and AI generates a plausible-but-wrong hypothesis confidently
  • The bug is a timing/concurrency issue that requires observing actual runtime behaviour
  • AI keeps suggesting fixes without you verifying its assumptions about your code structure

Debugging Workflow with AI

  1. Reproduce first. If you cannot reproduce the bug reliably, do not ask AI to fix it — ask AI to help you understand what conditions might cause it and how to reproduce it.
  2. Gather evidence. Collect the full error message, stack trace, the relevant input, and the actual vs expected output before prompting.
  3. Use AI for explanation, not just fixes. Ask "why does this happen?" before "how do I fix it?" — understanding the cause often reveals the right fix.
  4. Verify AI assumptions. Before applying a fix, confirm that AI's assumptions about your code are correct. AI often assumes standard library behaviour that your version doesn't have, or guesses the shape of your data incorrectly.
  5. Test the fix, not just that the error goes away. Ensure the fix addresses the root cause and doesn't just suppress the symptom.
  6. If two AI attempts fail, add logging. Add print/log statements to confirm your assumptions about what values exist at runtime, then return to AI with actual data rather than guesses.
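Step 6 can be as simple as logging the values you have been assuming. A hypothetical sketch (the pricing function and the "tier" field are invented for illustration):

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger(__name__)

def apply_discount(price, user):
    # Log the assumption under test, not just "got here".
    log.debug("apply_discount price=%r user.tier=%r", price, user.get("tier"))
    rate = {"gold": 0.2, "silver": 0.1}.get(user.get("tier"), 0.0)
    log.debug("resolved rate=%r", rate)
    return price * (1 - rate)
```

If the log shows `user.tier=None` where you expected `'gold'`, you can return to the AI with observed runtime data instead of a guess — which is the whole point of step 6.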

Checklist: Do You Understand This?

  • What is the AI rubber duck method and why does the instruction "do not suggest a fix yet" improve results?
  • What five pieces of evidence should you gather before prompting AI about a bug (the full error message, stack trace, relevant input, and actual vs expected output)?
  • Why is asking AI to step through a function with a specific input more reliable than asking it to find the bug?
  • Name two situations where AI debugging will mislead you, and explain how to avoid each.
  • What should you do if two AI debugging attempts have not found the bug?
  • Why is reproducing the bug a prerequisite before asking AI to help fix it?