What NOT to Do with AI
Most AI mistakes are not caused by the technology — they are caused by how people use it. These are the most common traps, why each one causes problems, and what to do instead.
1. Blind Trust in AI Outputs
What goes wrong
AI produces fluent, confident text that can be completely wrong. There is no error code for a hallucinated fact — the response looks identical to a correct one. Publishing AI output without reading it has led to fake legal citations, incorrect statistics in published reports, and fabricated quotes attributed to real people.
Safer alternative
Read every AI output before using it. For factual claims, verify against a primary source. For code, run it. For citations, check that the sources exist and actually say what the AI claims they say. The rule: AI writes the draft; you own the verification.
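The "for code, run it" step can be as lightweight as a few hand-checked assertions. A minimal sketch (the `median` function here is a stand-in for any AI-suggested snippet, not real output from any particular model):

```python
# Hypothetical AI-suggested function: claims to return the median of a list.
def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

# Don't take the claim on trust: exercise it on inputs you can check by hand.
assert median([1, 3, 2]) == 2          # odd length
assert median([4, 1, 2, 3]) == 2.5     # even length
assert median([7]) == 7                # single element
```

Three assertions take a minute to write and catch the most common failure modes (off-by-one indexing, wrong handling of even-length input) before the code reaches anything that matters.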
2. Leaking Confidential Data
What counts as confidential
Unreleased product plans, customer personal data (names, emails, health records), non-public financial data, internal legal matters, proprietary source code, employee performance information, passwords and API keys. When you paste any of this into a consumer AI tool (ChatGPT, Gemini, Claude.ai), it is transmitted to and processed on external servers.
Common mistakes
- Pasting a customer support ticket (contains PII) into ChatGPT to draft a reply
- Uploading a confidential contract to ask AI to summarise it
- Sharing unreleased financial projections to prepare a deck
- Including a real API key in a code snippet pasted for debugging help
Safer approach
- Check your employer's AI acceptable use policy before pasting anything work-related
- Anonymise data first: replace names with [CUSTOMER], [COMPANY]
- Use enterprise AI tools with a signed data processing agreement for sensitive work
- Never paste credentials or secrets — redact them first
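The anonymise-first step can be partially automated. A rough sketch: mask email addresses, mask long tokens that look like API keys, and replace names you already know appear in the text (the patterns and the 20-character key threshold are illustrative assumptions, not a complete PII scrubber; review the output before pasting):

```python
import re

def redact(text, known_names=()):
    # Mask email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # Mask long alphanumeric tokens that look like API keys
    # (assumption: 20+ characters of letters, digits, _ or -)
    text = re.sub(r"\b[A-Za-z0-9_\-]{20,}\b", "[REDACTED_KEY]", text)
    # Mask names you know appear in the text
    for name in known_names:
        text = text.replace(name, "[CUSTOMER]")
    return text

ticket = ("From: jane.doe@example.com -- Jane Doe reports key "
          "sk_live_abcdef1234567890abcd failing.")
print(redact(ticket, known_names=["Jane Doe"]))
```

Automated redaction is a first pass, not a guarantee: names, addresses, and account numbers in free text will slip through regexes, so a human check is still the final step.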
3. Sloppy Prompts
Vague instructions produce vague outputs. Asking AI to "write something about marketing" gives you something about marketing — generic, unfocused, and probably not what you needed. The output is only as good as the specification.
Vague prompt
A prompt like "Write an email about our new product" yields a generic product email: no specifics, no clear action, and it reads like every other AI email.
Specific prompt
A prompt like "Write a 100-word email to existing customers announcing our new reporting feature. Friendly but professional tone. One call to action: click to try it. Do not mention pricing" tells the model exactly what success looks like.
Specificity checklist: Who is the audience? What format and length? What is the specific goal? What constraints apply? What should NOT be included?
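The checklist can be enforced mechanically so no field is forgotten before a prompt is sent. A sketch (the function and field names are illustrative, not any tool's API):

```python
def build_prompt(audience, fmt, goal, constraints, exclude):
    # Refuse to build a prompt with a missing field -- vague in, vague out.
    fields = [("audience", audience), ("format and length", fmt),
              ("goal", goal), ("constraints", constraints),
              ("exclusions", exclude)]
    for name, value in fields:
        if not value:
            raise ValueError(f"prompt is missing: {name}")
    return (
        f"Audience: {audience}\n"
        f"Format and length: {fmt}\n"
        f"Goal: {goal}\n"
        f"Constraints: {constraints}\n"
        f"Do not include: {exclude}"
    )

print(build_prompt(
    audience="existing customers",
    fmt="email, under 100 words",
    goal="announce the new reporting feature; drive clicks to the trial link",
    constraints="friendly but professional tone, one call to action",
    exclude="pricing details",
))
```

Whether or not you script it, the habit is the point: answer all five questions before the model sees the prompt, not after you see a disappointing response.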
4. Using AI for Tasks It Fails At
| Task type | Why AI fails | Better approach |
|---|---|---|
| Current events | Training data has a knowledge cutoff; AI describes outdated information as current with confidence | Search current sources; use AI to summarise what you find |
| Precise arithmetic | LLMs predict tokens — they do not calculate; multi-step arithmetic errors are common | Use a calculator or spreadsheet; have AI write the formula, not compute the answer |
| Legal, medical, financial advice | No professional licence, no accountability, no knowledge of your specific situation or jurisdiction | Use AI to research and prepare questions; consult a qualified professional for decisions |
| Real-world verification | "Is this shop open right now?" — AI cannot check the real world; it guesses from training data | Use appropriate real-world tools: maps, websites, phone calls |
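The arithmetic row is worth making concrete: rather than asking the model for the answer, ask it for the formula and evaluate the formula yourself, where the result is deterministic. A sketch with invented figures:

```python
# Compound interest: a model can write this formula reliably, but asking it
# to *evaluate* the numbers in chat invites multi-step arithmetic errors.
# All figures below are invented for illustration.
principal = 12_500.00   # initial deposit
rate = 0.043            # 4.3% annual interest
years = 7

final = principal * (1 + rate) ** years
print(f"{final:.2f}")
```

The same division of labour works in a spreadsheet: have the AI draft `=A1*(1+A2)^A3`, then let the spreadsheet do the calculating.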
5. Treating AI as an Authority
AI has no expertise. It has pattern-matched on text written by experts, but it does not understand what it says, cannot verify its own claims, and has no accountability for being wrong.
- Sycophancy: if you tell AI it is wrong (even when you are the one who is wrong), it often agrees with you and reverses its answer to please you. Do not use agreement as a signal of correctness.
- Confident hallucination: AI gives wrong answers with the same tone and confidence as correct ones. Confident language is not evidence of accuracy.
6. Over-Relying on AI for Decisions
AI is a first-draft tool, not a decision-maker. For consequential decisions — a personnel matter, a significant financial choice, a legal position — AI can help you research and structure your thinking, but the decision belongs to a human who bears responsibility for the outcome.
The accountability test
If something goes wrong with this decision, who is responsible? If the answer is "AI", you have outsourced accountability to a system that cannot be held accountable. Decisions with real consequences require a human who accepts responsibility.
Anti-Pattern Quick Reference
| What people do | What goes wrong | Safer alternative |
|---|---|---|
| Copy-paste AI output directly into a report | Hallucinated facts, wrong statistics published | Read it, verify claims, edit before publishing |
| Paste a customer email into public AI tool | Customer PII sent to third-party server | Anonymise first or use enterprise tool with DPA |
| Ask AI for today's stock price or live data | Gets confident wrong answer from stale training data | Use a financial data source; AI cannot know current prices |
| Send vague prompt and accept first response | Generic output requiring significant rework | Specify audience, format, length, constraints upfront |
| Ask AI to verify its own output | Circular self-confirmation — not independent verification | Verify against an independent external source |
| Use AI to make a hiring or personnel decision | Training data biases amplified; no accountability; legal exposure | AI supports screening; human makes the final call |
Checklist: Do You Understand This?
- Name three types of data you should not paste into a consumer AI tool without anonymising first.
- Why does confident AI language not indicate a correct answer?
- What is sycophancy in AI — and why is it a problem for verification?
- Name two task types where AI should never be the final source of truth.
- What is the accountability test, and when should you apply it?
- Rewrite this prompt to be more specific: "Help me write something about our product launch."