Sector-Specific AI Regulation
Horizontal AI frameworks (the EU AI Act, the NIST AI RMF) provide a general baseline but cannot address the specific technical requirements, liability structures, and regulatory bodies of each sector. Healthcare, finance, employment, education, and law enforcement each have pre-existing regulatory regimes that AI must comply with in addition to, not instead of, horizontal AI regulation. Understanding how sector-specific rules intersect with general AI frameworks is essential for deployers in regulated industries.
Healthcare and Medical AI
Healthcare AI is among the most heavily regulated contexts, because errors can directly cause physical harm. Key frameworks:
United States
- FDA Software as a Medical Device (SaMD): AI that analyses medical images, predicts patient deterioration, or supports clinical decisions is regulated as a medical device. Requires pre-market submission (510(k) or De Novo), evidence of clinical performance, and post-market surveillance (see the performance-metrics sketch after this list). As of 2025, over 950 AI/ML-enabled medical devices have received FDA clearance.
- FDA Predetermined Change Control Plan (PCCP): Allows AI medical devices to make specified modifications without additional pre-market submission, if the change protocol is pre-approved. Critical for continuously learning AI systems.
- HIPAA: AI systems processing protected health information (PHI) require a Business Associate Agreement (BAA) with the vendor; strict data use limitations; breach notification obligations.
- ONC Health Data Interoperability Rules: AI clinical decision support must meet information blocking and interoperability requirements.
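Both pre-market submissions and post-market surveillance hinge on demonstrated clinical performance, typically reported as sensitivity and specificity with confidence intervals. As a rough illustration of the underlying arithmetic only (a minimal sketch, not an FDA-specified procedure; the function names are ours), in Python:

```python
import math

def wilson_ci(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    if trials == 0:
        return (0.0, 1.0)
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return (max(0.0, centre - half), min(1.0, centre + half))

def clinical_performance(y_true: list[int], y_pred: list[int]) -> dict:
    """Sensitivity and specificity (with CIs) from binary labels and predictions."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))          # true positives
    fn = sum(t and not p for t, p in zip(y_true, y_pred))      # missed positives
    tn = sum(not t and not p for t, p in zip(y_true, y_pred))  # true negatives
    fp = sum(not t and p for t, p in zip(y_true, y_pred))      # false alarms
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
    }
```

In practice these figures come from a pre-specified clinical validation protocol, and post-market surveillance tracks whether deployed performance stays within the ranges claimed in the submission.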
European Union
- EU AI Act classifies AI that is itself a medical device, or a safety component of one, as high-risk via the MDR/IVDR listings in Annex I (Article 6(1)); emergency healthcare patient triage is separately listed in Annex III
- Must comply with both EU AI Act requirements AND EU MDR/IVDR medical device regulations — the more specific requirement prevails where they conflict
- GDPR special category data (health data) requires explicit consent or specific legal basis; DPIAs mandatory
Financial Services
| Jurisdiction | Framework | Key AI obligations |
|---|---|---|
| US (Banking) | SR 11-7 Guidance on Model Risk Management (Federal Reserve + OCC) | All models used in decision-making must have formal validation by an independent team; documentation of assumptions, inputs, and limitations; ongoing monitoring |
| US (Consumer Credit) | ECOA (Equal Credit Opportunity Act), FCRA, Fair Housing Act | AI credit scoring must not produce disparate impact on protected classes; adverse action notice required explaining credit decisions; right to explanation |
| EU | EU AI Act + MiFID II + EBA Guidelines on ICT Risk | AI used in credit scoring, insurance pricing, and fraud detection classified high-risk; EBA 2021 discussion paper on AI in financial services sets governance expectations |
| UK | FCA/PRA AI principles; Consumer Duty | Consumer Duty (2023) requires firms to demonstrate AI-assisted customer outcomes are fair; FCA has published discussion papers on AI governance expectations |
SR 11-7 is particularly significant: it predates the modern AI era but has been extended by regulators to cover all models including ML. It requires independent model validation — meaning the team that built a model cannot validate it. This has significant resourcing implications for AI-heavy financial institutions.
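In practice, the ongoing-monitoring expectation is often operationalised with input-drift statistics such as the population stability index (PSI), which compares a feature's current distribution against its distribution when the model was validated. A minimal sketch (the 0.1/0.25 thresholds below are common industry rules of thumb, not anything SR 11-7 itself mandates):

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray,
                               n_bins: int = 10) -> float:
    """PSI = sum over bins of (current% - baseline%) * ln(current% / baseline%)."""
    # Quantile bin edges taken from the validation-time baseline
    edges = np.quantile(baseline, np.linspace(0, 1, n_bins + 1))
    base_pct = np.histogram(np.clip(baseline, edges[0], edges[-1]),
                            bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(np.clip(current, edges[0], edges[-1]),
                            bins=edges)[0] / len(current)
    # Floor the shares to avoid log(0) on empty bins
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Rule-of-thumb reading: < 0.1 stable, 0.1-0.25 monitor,
# > 0.25 investigate and consider revalidation.
```

A drift alert does not by itself establish a model failure; it feeds the documented monitoring process that the independent validation team reviews.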
Employment and HR
AI used in hiring, promotion, performance management, and compensation decisions is subject to anti-discrimination law in most jurisdictions. AI does not exempt employers from existing obligations — it intensifies them because AI can apply biased criteria at massive scale.
- US EEOC guidance (2023): AI-based hiring tools must not produce disparate impact on race, colour, religion, sex, national origin, age, or disability. Employers are liable for discriminatory AI tools even if provided by a vendor; the "four-fifths rule" for adverse impact analysis applies to AI-assisted selection (see the sketch after this list).
- NYC Local Law 144 (2023): Employers and employment agencies using "automated employment decision tools" must commission annual bias audits by an independent auditor, notify candidates before use, and post audit results publicly. The first employment-AI bias-audit requirement in force in any US jurisdiction.
- EU AI Act: AI for recruitment, promotion, task allocation, and performance monitoring in employment contexts listed as high-risk in Annex III. Conformity assessment, registration, and worker information rights required.
- Illinois AI Video Interview Act (2020): Employers using AI to analyse video interviews must notify candidates, explain how AI is used, and obtain consent. One of the earliest AI-specific employment laws enacted.
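The EEOC's four-fifths rule and the impact ratios published in NYC Local Law 144 bias audits reduce to the same arithmetic: compare each group's selection rate to the most-favoured group's rate. A minimal sketch with hypothetical data (the 0.8 threshold comes from the EEOC's Uniform Guidelines; LL144 requires publishing the ratios rather than meeting a statutory cut-off):

```python
from collections import Counter

def impact_ratios(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    applied: Counter = Counter()
    selected: Counter = Counter()
    for group, was_selected in outcomes:
        applied[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / applied[g] for g in applied}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

# Hypothetical screening data: 50% vs. 30% selection rates
data = ([("group_a", True)] * 50 + [("group_a", False)] * 50
        + [("group_b", True)] * 30 + [("group_b", False)] * 70)
print(impact_ratios(data))  # group_b: 0.3 / 0.5 = 0.6 -> below the 0.8 line
```

A ratio below 0.8 is evidence of potential adverse impact, not a per se violation; it shifts the burden to the employer to justify the selection procedure.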
Education
- FERPA (US): Family Educational Rights and Privacy Act. AI systems processing student educational records must comply with FERPA access, consent, and disclosure requirements. EdTech AI vendors must be FERPA-compliant before schools can use them with student data.
- COPPA (US): Children's Online Privacy Protection Act. AI tools used with students under 13 face strict consent and data handling requirements.
- EU AI Act: AI systems that determine access to educational institutions, evaluate learning outcomes, or assess students in ways that affect their educational paths are classified as high-risk.
- Proctoring AI scrutiny: AI-powered remote proctoring systems face heightened regulatory and legal scrutiny, in the EU over biometric data processing under the GDPR and in several US states over accuracy disparities across demographic groups.
- GenAI academic integrity: Multiple jurisdictions and institutions are developing policy frameworks for disclosure and acceptable use of generative AI in academic work — not yet codified as regulation but rapidly evolving.
Law Enforcement and Criminal Justice
This is the most contested area of AI regulation, with significant divergence between jurisdictions:
High-risk / prohibited uses (EU AI Act)
- Real-time remote biometric identification in public spaces — prohibited except for specific law enforcement exceptions (missing persons, imminent terrorist threat)
- Predictive policing systems that assess an individual's risk of offending based solely on profiling or on personality traits and characteristics — prohibited
- AI in criminal risk assessment that informs judicial decisions — high-risk under Annex III, with the full set of high-risk obligations applying
US context
- Facial recognition moratoriums in several cities (San Francisco, Boston, Baltimore) — not a federal ban
- COMPAS recidivism risk scores were contested in State v. Loomis (Wisconsin Supreme Court, 2016) — the court upheld the use of AI risk scores in sentencing, subject to disclosure of the tool's limitations
- No federal law prohibiting real-time biometric surveillance — patchwork of state and municipal restrictions
Checklist: Do You Understand This?
- What FDA pathway does a clinical AI decision support tool typically follow in the US, and what is a PCCP?
- What does SR 11-7 require for AI models used in financial decision-making?
- Under NYC Local Law 144, what must an employer do before using an automated employment decision tool?
- How does the EU AI Act classify AI used in recruitment and performance management?
- What does the EU AI Act prohibit regarding AI in law enforcement that the US has not yet prohibited at the federal level?