
AI in Legal

Legal AI adoption more than doubled in a single year — from 23% of corporate legal departments in 2024 to 52% in 2025. The market for legal AI tools grew from $1.5 billion to over $3 billion in the same period. But the legal industry's exposure to AI failure is severe: citing a hallucinated case in a court filing is a sanctionable offence. This page maps where AI delivers genuine value in law and where the risks demand human oversight.

Contract Review & Drafting

Contract review was one of the first legal tasks to be automated with AI, and remains the most mature application. AI systems extract clauses, flag deviations from standard positions, and identify missing provisions — a task that previously required junior lawyers to read every page manually.

Clause Extraction

Tools like Kira (Litera) and Luminance identify and extract over 1,000 clause types — limitation of liability, indemnification, governing law — across large document sets, with vendors claiming 60–90% time savings.
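The core idea can be sketched in a few lines. This is a deliberately naive illustration using keyword patterns — commercial tools like Kira and Luminance use trained ML models, not regex rules, and the patterns and sample text below are invented for the example.

```python
import re

# Hypothetical patterns for three common clause types. Real extraction
# tools use supervised models trained on annotated contracts.
CLAUSE_PATTERNS = {
    "limitation_of_liability": re.compile(r"in no event shall|limitation of liability", re.I),
    "indemnification": re.compile(r"indemnif(y|ies|ication)", re.I),
    "governing_law": re.compile(r"governed by the laws of|governing law", re.I),
}

def extract_clauses(contract_text):
    """Split a contract into paragraphs and tag each with matching clause types."""
    found = {label: [] for label in CLAUSE_PATTERNS}
    for para in contract_text.split("\n\n"):
        for label, pattern in CLAUSE_PATTERNS.items():
            if pattern.search(para):
                found[label].append(para.strip())
    return found

sample = (
    "This Agreement shall be governed by the laws of England and Wales.\n\n"
    "In no event shall either party be liable for indirect damages."
)
print(extract_clauses(sample)["governing_law"])
```

Even this toy version shows why the task automates well: clause language is highly conventional, so both rules and trained models can locate it reliably across thousands of documents.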

Contract Lifecycle Management

AI integration into CLM platforms (Ironclad, Icertis, DocuSign CLM) reduces contract cycle times by up to 40% by automating approval routing, obligation extraction, and renewal alerts.

Playbook Comparison

Lawyers define standard positions (a "playbook") and AI flags deviations. Acceptable deviations can be auto-approved; escalated deviations go to counsel. Speeds negotiation without reducing legal judgment.
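The triage logic above can be sketched as a simple decision function. The playbook entries and positions here are invented for illustration; a real system would compare extracted clause text semantically rather than by exact string match.

```python
# Hypothetical playbook: for each clause type, the firm's standard position
# and the alternatives counsel has pre-approved for auto-acceptance.
PLAYBOOK = {
    "governing_law": {
        "standard": "England and Wales",
        "acceptable": {"England and Wales", "Scotland"},
    },
    "liability_cap": {
        "standard": "12 months of fees",
        "acceptable": {"12 months of fees", "24 months of fees"},
    },
}

def triage_clause(clause_type, negotiated_position):
    """Return 'standard', 'auto-approve', or 'escalate' for a negotiated clause."""
    entry = PLAYBOOK[clause_type]
    if negotiated_position == entry["standard"]:
        return "standard"
    if negotiated_position in entry["acceptable"]:
        return "auto-approve"
    return "escalate"  # outside the playbook: a lawyer must decide

print(triage_clause("governing_law", "Delaware"))
```

The key design point is that the AI only routes: pre-approved deviations pass through automatically, while anything outside the playbook is escalated to counsel, so legal judgment is preserved where it matters.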

First-Draft Drafting

LLM-based tools generate first drafts of standard agreements (NDAs, service agreements) from a brief. Lawyers review and refine — reducing drafting time but not eliminating the review obligation.

Legal Research

Case research is time-intensive: finding relevant precedent across thousands of cases, statutes, and secondary sources. AI research tools index legal databases and surface relevant authority in natural language.

Major platforms (2025)

  • Westlaw AI-Assisted Research (Thomson Reuters): Natural language queries over case law and statutory databases. Generates summaries with citations — but independent testing found a 33% hallucination rate, so verification remains required.
  • Lexis+ AI (LexisNexis): Research assistant with access to Lexis content. Tested at a 17% hallucination rate — better, but still too high to trust without verification.
  • Harvey AI: LLM trained on legal data, deployed at major firms including A&O Shearman and PwC Legal. Used for memo drafting, due diligence, and research tasks.
  • Casetext (acquired by Thomson Reuters): Co-Counsel assistant for document analysis and research, integrated into legal workflows.

The Hallucination Problem in Law

Hallucinations — AI generating plausible but false information — are uniquely dangerous in legal practice. Cases, statutes, and citations must be real, accurately attributed, and say exactly what the AI claims they say. AI-generated fake citations in court filings have led to sanctions, bar disciplinary proceedings, and significant reputational damage.

Hallucination evidence from research (2025)

  • Stanford HAI testing: AI legal models hallucinated in at least 1 out of 6 queries
  • Lexis+ AI: 17% hallucination rate in independent benchmarking
  • Westlaw AI: 33% hallucination rate in independent testing
  • Thomson Reuters Ask Practical Law AI: accurate only 18% of the time in one assessment
  • Over 600 AI hallucination cases on record, implicating 128 lawyers including those from top-tier firms

Required response: human-in-the-loop verification

The American Bar Association's 2024 ethics guidance establishes that lawyers must have a reasonable understanding of AI capabilities and limitations, and must verify all AI-generated output. Courts sanction counsel regardless of which tool was used or what the vendor claimed. Verification is not optional — it is a professional obligation.
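One concrete form this verification takes is a citation gate: no AI-drafted citation reaches a filing until it is confirmed against a trusted source. The sketch below is a minimal illustration — `lookup_case` and its tiny lookup table stand in for a real citator service such as Shepard's or KeyCite. Varghese v. China Southern Airlines is one of the fabricated cases from the Mata v. Avianca filing.

```python
# Hypothetical verified-citation store; in practice this would be a query
# against an authoritative legal database, never a local dict.
VERIFIED_REPORTER = {
    "Mata v. Avianca, Inc.": "22-cv-1461 (S.D.N.Y.)",
}

def lookup_case(case_name):
    """Stand-in for a real citator lookup; returns None if the case is unknown."""
    return VERIFIED_REPORTER.get(case_name)

def verify_citations(cited_cases):
    """Return the citations that could NOT be verified and need human review."""
    return [c for c in cited_cases if lookup_case(c) is None]

draft_citations = [
    "Mata v. Avianca, Inc.",
    "Varghese v. China Southern Airlines",  # fabricated by an LLM
]
print(verify_citations(draft_citations))
```

The gate does not replace the lawyer's duty — it only narrows the set of citations a human must chase down, and every unverified hit still requires attorney judgment before the filing goes out.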

Compliance & eDiscovery

Document review in litigation and regulatory investigations is one of the oldest and highest-value AI applications in law. Technology-Assisted Review (TAR / predictive coding) has been court-accepted since 2012.

Predictive Coding (TAR)

Lawyers code a seed set of documents; AI ranks the remaining corpus by predicted relevance. Reduces review volume from millions to tens of thousands of documents. Court-accepted methodology since 2012.
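The ranking step can be illustrated with a deliberately simple relevance scorer. Real TAR systems train a supervised classifier (e.g. logistic regression or an SVM over TF-IDF features) on the coded seed set; the token-overlap score and sample documents below are invented stand-ins for that model.

```python
from collections import Counter

def tokenize(text):
    return Counter(text.lower().split())

def relevance_score(doc, seed_relevant):
    """Naive stand-in for a trained classifier: fraction of the document's
    tokens that also appear in the lawyer-coded relevant seed set."""
    seed_vocab = Counter()
    for seed_doc in seed_relevant:
        seed_vocab += tokenize(seed_doc)
    doc_tokens = tokenize(doc)
    overlap = sum(min(doc_tokens[t], seed_vocab[t]) for t in doc_tokens)
    return overlap / max(sum(doc_tokens.values()), 1)

# Seed set: documents a lawyer has already coded as relevant.
seed = ["merger pricing discussion with the board",
        "board approval of the merger terms"]

corpus = ["notes on the merger pricing presented to the board",
          "office party catering order"]

# Rank the remaining corpus by predicted relevance; review top-down.
ranked = sorted(corpus, key=lambda d: relevance_score(d, seed), reverse=True)
print(ranked[0])
```

Reviewers then work down the ranked list and stop when the rate of new relevant documents falls below an agreed threshold — this is what shrinks a multi-million-document corpus to tens of thousands actually read by humans.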

Regulatory Compliance Monitoring

AI scans communications (email, Teams, Slack) for policy violations, insider trading signals, or regulatory triggers. Used by financial services compliance teams to surface patterns requiring review.

Sanctions & KYC Screening

AI matches entity names against sanctions lists and adverse media in real time. Replaces rule-based string matching with NLP that handles name variations, transliterations, and aliases.
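Why fuzzy matching beats exact string comparison is easy to demonstrate. The sketch below uses Python's standard-library `difflib` as a stand-in for the NLP matching real screening systems use; the sanctions-list entries and threshold are invented for the example.

```python
from difflib import SequenceMatcher

# Illustrative entries only — real systems screen against official lists
# (OFAC SDN, UN, EU consolidated list) updated continuously.
SANCTIONS_LIST = ["Ivan Petrovich Sidorov", "Acme Trading FZE"]

def screen_name(candidate, threshold=0.8):
    """Return list entries whose similarity to `candidate` exceeds the
    threshold, with scores. Exact matching would miss transliteration
    variants that fuzzy matching catches."""
    hits = []
    for entry in SANCTIONS_LIST:
        ratio = SequenceMatcher(None, candidate.lower(), entry.lower()).ratio()
        if ratio >= threshold:
            hits.append((entry, round(ratio, 2)))
    return hits

# A transliteration variant (missing 'h') still matches.
print(screen_name("Ivan Petrovic Sidorov"))
```

The threshold is the operational lever: set it too high and variant spellings slip through; too low and the false-positive queue overwhelms the compliance team — which is why these systems surface matches for human review rather than auto-blocking.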

Privilege Review

AI classifies documents as potentially privileged for human attorney review, reducing time spent on attorney-client privilege determinations in large discovery sets.

Risks & Limits of Legal AI

Fabricated Citations

LLMs invent plausible-sounding case names, docket numbers, and holdings. In Mata v. Avianca (SDNY 2023), AI hallucinations led to a court sanctions order. More cases followed, and the pattern continues in 2025.

Confidentiality Breach Risk

Uploading client documents to a general-purpose AI tool may breach attorney-client privilege and client confidentiality obligations. Enterprise-grade tools with data processing agreements are required.

Jurisdiction Blindness

AI trained on US case law may apply US legal reasoning to UK, EU, or other jurisdictions incorrectly. Legal AI must be jurisdiction-aware, and output verified by local counsel.

Unauthorised Practice of Law

AI tools marketed to consumers as "legal advice" may constitute unauthorised practice of law in many jurisdictions. There is active regulatory debate about where AI assistance ends and legal practice begins.

Checklist: Do You Understand This?

  • What is TAR (Technology-Assisted Review) and why is it court-accepted?
  • Why is an AI hallucination in a court filing more serious than in a marketing email?
  • What verification obligations do lawyers have under the ABA's 2024 ethics guidance?
  • Name two enterprise legal AI platforms and their core use cases.
  • What is the difference between a contract playbook and a CLM system, and where does AI assist in each?
  • Why does uploading client documents to a general-purpose AI tool create a professional responsibility risk?