🧠 All Things AI

AI Compliance Checklist

This checklist covers the practical steps for teams deploying AI systems that may fall under the EU AI Act, US state laws, or sector-specific regulations. Use it as a starting framework — it does not substitute for legal counsel on your specific product and jurisdiction.

Step 1: Classify Your AI System

Before any other action, classify your system into the appropriate regulatory tier. The EU AI Act risk classification is the most structured framework to use as a starting point — even if you're primarily targeting US markets.

Check: Unacceptable risk (banned in EU from Feb 2025)

  • Does your system perform social scoring for public authorities?
  • Does it perform real-time biometric surveillance in public spaces?
  • Does it use manipulative or subliminal techniques?

If yes to any: these uses are prohibited for EU markets. Stop.

Check: High-risk system (EU AI Act Annex III)

  • Recruitment, CV screening, employment decisions, performance evaluation?
  • Educational admissions, exam/assessment proctoring?
  • Credit scoring, insurance risk assessment, benefits eligibility?
  • Law enforcement risk profiling, predictive policing?
  • Medical device AI, clinical decision support?
  • Critical infrastructure management (water, energy, transport)?

If yes: full high-risk compliance required from August 2026.

Check: Limited risk (chatbots, deepfakes)

  • Does your product use a conversational AI chatbot interface?
  • Does it generate synthetic images/video/audio of real people?

If yes: transparency disclosure obligations apply.
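The three checks above can be sketched as a single classification helper. This is a minimal Python sketch: the flag names are our own illustrative shorthand, not terms from the Act, and the ordering encodes the tier precedence (prohibited uses dominate, then high-risk, then transparency-only).

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

def classify(social_scoring=False, realtime_biometric_surveillance=False,
             manipulative_techniques=False, annex_iii_use_case=False,
             chatbot_interface=False, synthetic_media_of_real_people=False):
    """Order matters: a system that is both high-risk and prohibited is prohibited."""
    if social_scoring or realtime_biometric_surveillance or manipulative_techniques:
        return RiskTier.UNACCEPTABLE
    if annex_iii_use_case:
        return RiskTier.HIGH
    if chatbot_interface or synthetic_media_of_real_people:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

A CV-screening product, for example, would set `annex_iii_use_case=True` and land in the high-risk tier regardless of whether it also has a chatbot interface.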

Step 2: Determine Jurisdictions

If you serve users in                          Key regulation                                 Effective date
European Union                                 EU AI Act (risk-based)                         GPAI: Aug 2025; high-risk: Aug 2026
California                                     SB-53, AB 2013                                 Jan 2026
Texas                                          TRAIGA (Texas Responsible AI Governance Act)   Jan 2026
Colorado                                       Colorado AI Act                                June 2026
Illinois                                       AI in Employment Act                           Jan 2026
China                                          GPAI pre-approval, content labelling           In force
Regulated sector (healthcare, finance, etc.)   FDA, SEC, EEOC, CFPB sector rules              In force / ongoing
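One way to operationalise the jurisdiction check is a simple lookup keyed by user region. This sketch just restates the table above as data; the region keys are illustrative.

```python
# Entries restate the jurisdiction table; dates are as listed there.
JURISDICTION_RULES = {
    "European Union": ("EU AI Act (risk-based)", "GPAI: Aug 2025; high-risk: Aug 2026"),
    "California": ("SB-53, AB 2013", "Jan 2026"),
    "Texas": ("TRAIGA", "Jan 2026"),
    "Colorado": ("Colorado AI Act", "June 2026"),
    "Illinois": ("AI in Employment Act", "Jan 2026"),
    "China": ("GPAI pre-approval, content labelling", "in force"),
}

def applicable_rules(user_regions):
    """Return the regulations that apply given where your users are located."""
    return {region: JURISDICTION_RULES[region]
            for region in user_regions if region in JURISDICTION_RULES}
```

Keeping this as data (rather than prose in a wiki) makes the quarterly regulatory-landscape review a diff against one file.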

Step 3: Document Your System

Required documentation regardless of tier (high-risk expands this significantly):

Minimum documentation baseline

  • Intended purpose and scope of the AI system
  • AI models used (including third-party models and providers)
  • Training data characteristics (for fine-tuned or custom models)
  • Performance metrics on representative test sets
  • Known limitations and failure modes
  • Human roles in the system (who reviews, approves, can override)
  • Data flows: what data enters, what leaves, where processed
  • Version history: when updated, what changed, impact assessment
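The baseline above can be captured as a structured record so that gaps are machine-checkable. A minimal sketch, assuming nothing beyond the list itself: the field names are our own, not mandated by any regulation.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class SystemDocumentation:
    intended_purpose: str = ""
    models: list = field(default_factory=list)            # incl. third-party models and providers
    training_data_notes: str = ""                         # for fine-tuned or custom models
    performance_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)
    human_roles: dict = field(default_factory=dict)       # who reviews, approves, can override
    data_flows: str = ""                                  # what enters, what leaves, where processed
    version_history: list = field(default_factory=list)   # when updated, what changed

    def missing_fields(self):
        """Names of baseline fields that are still empty."""
        return [name for name, value in asdict(self).items() if not value]
```

Running `missing_fields()` in CI against each deployed system is an easy way to stop documentation from silently going stale.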

Step 4: Implement Human Oversight

For high-risk systems (and good practice for any consequential AI):

  • Override capability — A human can stop, pause, or reverse any AI decision that affects an individual significantly
  • Confidence thresholds — Automated decisions only when confidence exceeds threshold; otherwise route to human review
  • Explanation on request — Affected individuals can request an explanation of any AI-driven decision (required by GDPR Article 22)
  • Audit trail — Log every significant decision with enough context to reconstruct what happened
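Confidence-threshold routing combined with an audit trail might look like the sketch below. The threshold value and field names are illustrative; in production the log would be durable, append-only storage, not an in-memory list.

```python
import time

CONFIDENCE_THRESHOLD = 0.90   # illustrative; set per use case and validate empirically
AUDIT_LOG = []                # stand-in for durable, append-only storage

def decide(subject_id, model_score, model_decision):
    """Apply the model's decision automatically only above the threshold;
    otherwise route to human review. Every outcome is logged with enough
    context to reconstruct what happened."""
    if model_score >= CONFIDENCE_THRESHOLD:
        outcome = {"decision": model_decision, "route": "automated"}
    else:
        outcome = {"decision": None, "route": "human_review"}
    AUDIT_LOG.append({"ts": time.time(), "subject": subject_id,
                      "score": model_score, **outcome})
    return outcome
```

Note that low-confidence cases log a `None` decision: the human reviewer's eventual decision should be appended as its own audit entry, preserving the full chain.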

Step 5: Incident Response

  • GDPR data breach: 72-hour notification to supervisory authority; notify affected individuals if high risk
  • EU AI Act serious incident: Providers of high-risk systems must report serious incidents to national authorities; GPAI providers with systemic risk have separate reporting obligations
  • US sector rules: SEC material AI incidents may require disclosure; FDA AI device malfunctions have reporting pathways
  • Internal playbook: Define what constitutes an AI incident, who is notified, who decides on response, and how the system is taken offline if needed
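Notification windows can be encoded so the playbook computes due dates automatically. The 72-hour GDPR window is from the bullet above; the 15-day EU AI Act figure is an assumption on our part (it is the Act's general serious-incident deadline, and the most severe cases have shorter windows), so verify it for your category.

```python
from datetime import datetime, timedelta, timezone

# 72h is from GDPR; the 15-day AI Act window is our assumption for the
# general case -- severe incidents (e.g. deaths, critical infrastructure)
# carry shorter deadlines under the Act.
REPORTING_DEADLINES = {
    "gdpr_data_breach": timedelta(hours=72),
    "ai_act_serious_incident": timedelta(days=15),
}

def notification_due(incident_type, detected_at):
    """Latest time by which the relevant authority must be notified."""
    return detected_at + REPORTING_DEADLINES[incident_type]
```

Wiring this into the on-call alerting path means the clock starts at detection, not at the first compliance meeting.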

Step 6: High-Risk Conformity Assessment (EU, Aug 2026)

If your system is high-risk under the EU AI Act:

  1. Self-assessment against the technical requirements (most categories)
  2. Third-party assessment required for some biometric and law enforcement systems
  3. Issue a Declaration of Conformity
  4. Affix the CE marking to the system
  5. Register in the EU AI database (euaidb.eu)
  6. Appoint an EU representative if you are based outside the EU

Step 7: GPAI Obligations (if applicable)

If you are releasing a general-purpose AI model (not just using one):

  • Technical documentation published for integrators
  • Copyright compliance policy publicly available
  • Summary of training data sources published
  • For systemic risk models (>10²⁵ FLOPs): adversarial testing, incident reporting, cybersecurity measures
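The 10²⁵ FLOPs presumption can be sanity-checked with the common "6 × parameters × tokens" rule of thumb for dense-transformer training compute. This is an engineering approximation, not a legal test; the regulator's presumption is about actual cumulative training compute.

```python
SYSTEMIC_RISK_FLOPS = 1e25  # EU AI Act presumption threshold for GPAI systemic risk

def estimated_training_flops(n_params, n_tokens):
    """Rule-of-thumb estimate: ~6 FLOPs per parameter per training token."""
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params, n_tokens):
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_FLOPS
```

For example, a 70B-parameter model trained on 15T tokens lands around 6.3 × 10²⁴ FLOPs, just under the threshold, which is exactly the kind of borderline case worth documenting carefully.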

Ongoing Obligations

Activity                          Frequency                                    Who owns it
Post-market monitoring review     Quarterly (minimum)                          AI product owner + compliance team
Model update impact assessment    On every model update                        Engineering + compliance
Risk register review              Annual minimum; after incidents              Compliance lead
Bias audit                        Annual (high-risk); every two years (others)  Data science + compliance
User disclosure updates           When AI system changes materially            Product + legal
Regulatory landscape review       Quarterly (the landscape is fast-moving)     Legal / compliance

Using NIST AI RMF as a Safe Harbour

Texas TRAIGA and California SB-53 both provide safe harbour or reduced liability if you have implemented the NIST AI Risk Management Framework. Practical steps:

  1. Map your existing governance against the GOVERN / MAP / MEASURE / MANAGE functions
  2. Document gaps and create a remediation plan
  3. Implement and document the four functions formally
  4. Store evidence of implementation — auditors will ask for it
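Steps 1, 2 and 4 above can be tracked with a small evidence map per AI system. A sketch; how you actually store evidence (wiki pages, ticket links, signed documents) is up to you.

```python
NIST_FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

def gap_report(evidence):
    """evidence maps each NIST AI RMF function to a list of stored artifacts
    (policies, risk registers, test reports). Functions with no recorded
    evidence are the gaps that need a remediation plan."""
    return [fn for fn in NIST_FUNCTIONS if not evidence.get(fn)]
```

An empty return value is the state auditors want to see, with each artifact retrievable on request.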

Compliance Readiness Self-Check

  • Have you classified your system into the correct EU AI Act tier?
  • Have you identified all jurisdictions where your users are located?
  • Do you have minimum documentation covering intended purpose, models, and limitations?
  • Is there a human override mechanism for consequential AI decisions?
  • Do you have a written incident response plan for AI failures?
  • Have you implemented or mapped against the NIST AI RMF?
  • Is there a scheduled review cycle for ongoing compliance obligations?