
AI in Healthcare

Healthcare is one of the highest-stakes domains for AI. The potential upside — earlier diagnoses, faster drug discovery, reduced clinician burnout — is enormous. So is the downside: wrong predictions at clinical scale cause real harm. This page maps the current AI landscape in healthcare, what's actually deployed, and the regulatory and compliance constraints every practitioner must understand.

Medical Imaging & Diagnostics

AI performs best where there is abundant labelled data and a clear ground truth — which describes radiology, pathology, and dermatology well. As of late 2025, the FDA had authorised over 1,250 AI-enabled medical devices, the majority in diagnostic imaging.

Radiology

AI flags candidate findings (nodules, fractures, bleeds) for radiologist review, reducing read time and catching findings that busy overnight rosters might otherwise miss. Tools: Aidoc, Viz.ai, and Google's CXR Foundation model.

Pathology

Computational pathology scans whole-slide images for cancer grading. FDA-cleared tools such as Paige Prostate assist pathologists by highlighting regions of interest.

Dermatology

CNN-based classifiers trained on millions of dermoscopy images detect melanoma with sensitivity comparable to board-certified dermatologists in controlled studies.

Ophthalmology

IDx-DR (since renamed LumineticsCore, from Digital Diagnostics) was the first FDA-authorised autonomous diagnostic AI: it detects diabetic retinopathy without a clinician in the loop and is deployed at point-of-care screening sites.

Clinical Decision Support

Clinical decision support (CDS) tools surface recommendations inside EHR workflows — drug-drug interaction alerts, sepsis early warning, deterioration risk scores. The 21st Century Cures Act distinguishes CDS tools that require a clinician to independently review recommendations (not regulated as devices) from tools that automate decisions (regulated as SaMD — Software as a Medical Device).

In practice, 52% of FDA-cleared AI devices are "assistive" (they require clinician confirmation). This reflects regulatory reality rather than a technical limitation: full autonomy carries a higher evidentiary bar.

Key deployed CDS applications

  • Sepsis prediction (Epic's Sepsis Predictive Model, Dascena)
  • Deterioration scores (National Early Warning Score with ML refinement)
  • EHR-based patient risk stratification for readmission
  • AI-assisted medication reconciliation and dosing recommendations
  • Prior authorisation automation to reduce admin burden

Drug Discovery & Development

Drug discovery is where AI's value proposition is most dramatic. Traditional discovery takes 10–15 years and costs over $1 billion per approved drug. AI compresses several steps.

Protein Structure Prediction

AlphaFold 3 (Google DeepMind) predicts the 3D structures of proteins, nucleic acids, and small molecules, and how they interact. The AlphaFold Protein Structure Database already makes over 200 million predicted protein structures publicly available, accelerating target identification.

Molecular Generation

Generative models (diffusion-based, graph neural networks) propose novel drug-like molecules with desired properties. Companies like Insilico Medicine and Recursion Pharmaceuticals use this approach.

Clinical Trial Optimization

AI matches patients to trials faster, predicts trial-failure risk, and identifies patient subgroups likely to respond, reducing enrolment time and improving success rates.

FDA Regulatory AI

In January 2025, the FDA published draft guidance on using AI to support regulatory decision-making for drug and biological products. The CDER AI Council oversees AI use within drug evaluation.

EHR Data & Clinical Notes

EHRs generate enormous amounts of unstructured text — clinical notes, discharge summaries, referral letters. LLMs are being deployed for ambient documentation (AI listens to a consultation and writes the note), freeing clinicians from administrative burden estimated at 2+ hours per day.

Ambient Documentation

Nuance's DAX Copilot (a Microsoft product) and Suki transcribe patient-clinician conversations into structured EHR notes. Early evidence suggests savings of 7–10 minutes per consultation.

Patient Risk Stratification

Combining structured EHR data (labs, vitals, medications) with note text allows risk models to identify high-risk patients for proactive intervention before acute events.
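As a toy illustration, a risk score of this kind can be sketched as a logistic combination of structured fields and flags mined from note text. Every feature name and coefficient below is invented for illustration and drawn from no validated model:

```python
import math

def risk_score(features, weights, bias=-4.0):
    """Logistic readmission-risk score over EHR-derived features.

    `features` and `weights` are dicts keyed by feature name; missing
    features default to 0. All names and coefficients are illustrative.
    """
    z = bias + sum(weights[k] * features.get(k, 0.0) for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights: structured labs/history plus a flag an NLP step
# might extract from the free-text note.
weights = {
    "age_over_75": 0.9,
    "prior_admissions_12mo": 0.6,
    "creatinine_elevated": 0.7,
    "note_mentions_dyspnea": 0.5,   # derived from clinical note text
}

patient = {"age_over_75": 1, "prior_admissions_12mo": 2, "note_mentions_dyspnea": 1}
p = risk_score(patient, weights)   # higher than the all-zeros baseline
```

A real deployment would learn the coefficients from local data and recalibrate them periodically; the point here is only the shape of the feature fusion.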

Regulatory & Compliance Landscape

Healthcare AI operates under some of the strictest regulatory frameworks of any sector. Understanding these is non-negotiable for anyone building or deploying AI in this domain.

Key regulatory frameworks

  • FDA SaMD (US): AI that diagnoses, treats, or informs clinical decisions is a medical device. 510(k) clearance (predicate-based) or De Novo pathway required. FDA's January 2025 draft guidance requires lifecycle management and post-market monitoring for AI/ML devices.
  • EU MDR / AI Act: Medical AI is high-risk under both the Medical Devices Regulation and the EU AI Act. CE marking, conformity assessment, clinical evaluation, and post-market surveillance required.
  • HIPAA (US): PHI (Protected Health Information) may not be processed through third-party AI systems without a Business Associate Agreement. Sending patient data to a general-purpose LLM API violates HIPAA unless a BAA is in place.
  • GDPR (EU): Health data is "special category" data requiring explicit consent or a specific legal basis. Automated decision-making affecting patients requires human oversight provisions.
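To make the HIPAA point concrete, here is a minimal, purely illustrative sketch of pattern-based scrubbing applied before any text leaves the covered entity. Real de-identification must address all 18 HIPAA Safe Harbor identifier categories, far more than the two patterns shown:

```python
import re

# Illustrative scrubber: masks two common PHI patterns before text is
# sent to any third-party API. Not a substitute for full de-identification.
PHI_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSN
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),   # slash dates
]

def scrub(text: str) -> str:
    """Replace matched PHI patterns with placeholder tokens."""
    for pattern, token in PHI_PATTERNS:
        text = pattern.sub(token, text)
    return text

note = "Pt seen 3/14/2024, SSN 123-45-6789, reports chest pain."
safe = scrub(note)
# safe == "Pt seen [DATE], SSN [SSN], reports chest pain."
```

Even with a BAA in place, sending the minimum necessary data is the safer default; scrubbing plus a BAA beats either alone.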

Risks & Failure Modes

Distribution Shift

A model trained on images from one hospital system may fail at another with different scanner settings, patient demographics, or imaging protocols. Validation on local data is mandatory.
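A local-validation check can be as simple as comparing discrimination on the development cohort against a freshly labelled local cohort. The sketch below uses invented numbers and a plain rank-based AUC:

```python
def auc(labels, scores):
    """Rank-based AUC: probability a positive case outranks a negative one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative numbers only: the model separates the development cohort
# cleanly but degrades on a local cohort with different scanners and
# patient demographics.
dev_labels,   dev_scores   = [1, 1, 0, 0], [0.9, 0.8, 0.2, 0.1]
local_labels, local_scores = [1, 1, 0, 0], [0.6, 0.3, 0.5, 0.2]

auc_dev = auc(dev_labels, dev_scores)        # 1.0 on the source site
auc_local = auc(local_labels, local_scores)  # 0.75 locally: investigate before go-live
```

A pre-deployment gate would compare `auc_local` against a pre-registered threshold and block rollout on failure.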

Demographic Bias

Training datasets are historically skewed toward certain populations. A skin cancer model trained primarily on lighter skin tones underperforms on darker skin — with potentially life-threatening consequences.
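Catching this requires stratified evaluation: report sensitivity per subgroup, never just overall. A minimal sketch with invented data:

```python
def sensitivity(labels, preds):
    """True-positive rate: share of actual positives the model catches."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    return tp / (tp + fn)

# Illustrative records: (subgroup, true label, model prediction).
records = [
    ("lighter", 1, 1), ("lighter", 1, 1), ("lighter", 1, 1), ("lighter", 1, 0),
    ("darker", 1, 1), ("darker", 1, 0), ("darker", 1, 0), ("darker", 1, 0),
]

by_group = {}
for group, y, p in records:
    ys, ps = by_group.setdefault(group, ([], []))
    ys.append(y)
    ps.append(p)

gaps = {g: sensitivity(ys, ps) for g, (ys, ps) in by_group.items()}
# lighter: 3/4 = 0.75, darker: 1/4 = 0.25; a gap this wide should block deployment
```

The same stratification applies to any protected attribute the deployment population contains, not only skin tone.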

Alert Fatigue

CDS systems that fire too many alerts (including false positives) train clinicians to ignore them — the classic "boy who cried wolf" failure pattern. Precision matters as much as recall.
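The trade-off is easy to quantify: for each candidate alert threshold, measure precision alongside raw alert volume. The numbers below are invented to show the comparison, not drawn from any real system:

```python
def alert_stats(labels, alerts):
    """Precision and alert count for one threshold setting of a CDS rule."""
    tp = sum(1 for y, a in zip(labels, alerts) if y and a)
    fired = sum(alerts)
    precision = tp / fired if fired else 0.0
    return precision, fired

# Illustrative shift: 2 true sepsis cases among 20 monitored patients.
labels = [1, 1] + [0] * 18
loose  = [1] * 12 + [0] * 8                  # catches both cases, 12 alerts
strict = [1, 0] + [0] * 6 + [1] + [0] * 11   # only 2 alerts, 1 true positive

p_loose, n_loose = alert_stats(labels, loose)     # 2/12: clinicians learn to ignore it
p_strict, n_strict = alert_stats(labels, strict)  # 1/2: fewer, more trusted alerts
```

Neither setting is "correct" in the abstract; the threshold is a clinical governance decision weighing missed cases against fatigue.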

LLM Hallucinations in Clinical Contexts

General-purpose LLMs used for clinical note-writing or patient Q&A can hallucinate drug names, dosages, or diagnoses. Medical AI must use grounded, verified sources — not open-ended generation alone.
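One common mitigation is to verify extracted entities against an authoritative list before anything reaches a clinician. A minimal sketch, with a hypothetical formulary and hypothetical drug mentions:

```python
# Tiny illustrative formulary; a real one would come from the pharmacy system.
FORMULARY = {"metformin", "lisinopril", "atorvastatin"}

def unverified_drugs(candidate_drugs):
    """Return generated drug mentions absent from the reference list."""
    return [d for d in candidate_drugs if d.lower() not in FORMULARY]

# Suppose an extraction step pulled these mentions from an LLM-drafted note:
mentions = ["Metformin", "Lisinoprol"]   # second is a plausible-looking hallucination
flagged = unverified_drugs(mentions)     # ["Lisinoprol"]: route to human review
```

The same verify-before-surface pattern extends to dosages (range checks against the formulary entry) and diagnoses (checks against the coded problem list).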

Checklist: Do You Understand This?

  • Can you name three AI applications already cleared by the FDA as medical devices?
  • What is the difference between a CDS tool that is regulated as a medical device and one that is not?
  • Why does HIPAA compliance change significantly when you introduce a third-party LLM API?
  • What is distribution shift, and why is it especially dangerous in medical imaging AI?
  • How does AlphaFold accelerate drug discovery, and at which step of the pipeline?
  • What is an ambient documentation tool, and what clinical problem does it solve?