AI Compliance Checklist
This checklist covers the practical steps for teams deploying AI systems that may fall under the EU AI Act, US state laws, or sector-specific regulations. Use it as a starting framework — it does not substitute for legal counsel on your specific product and jurisdiction.
Step 1: Classify Your AI System
Before any other action, classify your system into the appropriate regulatory tier. The EU AI Act risk classification is the most structured framework to use as a starting point — even if you're primarily targeting US markets.
Check: Unacceptable risk (banned in EU from Feb 2025)
- Does your system perform social scoring for public authorities?
- Does it perform real-time biometric surveillance in public spaces?
- Does it use manipulative or subliminal techniques?
If yes to any: these uses are prohibited for EU markets. Stop.
Check: High-risk system (EU AI Act Annex III)
- Recruitment, CV screening, employment decisions, performance evaluation?
- Educational admissions, exam/assessment proctoring?
- Credit scoring, insurance risk assessment, benefits eligibility?
- Law enforcement risk profiling, predictive policing?
- Medical device AI, clinical decision support?
- Critical infrastructure management (water, energy, transport)?
If yes: full high-risk compliance required from August 2026.
Check: Limited risk (chatbots, deepfakes)
- Does your product use a conversational AI chatbot interface?
- Does it generate synthetic images/video/audio of real people?
If yes: transparency disclosure obligations apply.
Step 2: Determine Jurisdictions
| If you serve users in | Key regulation | Effective date |
|---|---|---|
| European Union | EU AI Act (risk-based) | GPAI: Aug 2025; High-risk: Aug 2026 |
| California | SB-53, AB 2013 | Jan 2026 |
| Texas | TRAIGA (Texas Responsible AI Governance Act) | Jan 2026 |
| Colorado | Colorado AI Act | June 2026 |
| Illinois | AI in Employment Act | Jan 2026 |
| China | Generative AI Interim Measures (pre-release security assessment and filing), content labelling | In force |
| Regulated sector (healthcare, finance, etc.) | FDA, SEC, EEOC, CFPB sector rules | In force / ongoing |
Step 3: Document Your System
Required documentation regardless of tier (high-risk expands this significantly):
Minimum documentation baseline
- Intended purpose and scope of the AI system
- AI models used (including third-party models and providers)
- Training data characteristics (for fine-tuned or custom models)
- Performance metrics on representative test sets
- Known limitations and failure modes
- Human roles in the system (who reviews, approves, can override)
- Data flows: what data enters, what leaves, where processed
- Version history: when updated, what changed, impact assessment
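The baseline above can be maintained as a machine-readable record that lives alongside the codebase and is updated on every release. A minimal sketch in Python — the field names and schema here are illustrative, not mandated by any regulation:

```python
from dataclasses import dataclass, field, asdict
from typing import List, Dict

@dataclass
class AISystemRecord:
    """Minimum documentation baseline for one AI system (illustrative schema)."""
    intended_purpose: str
    models: List[str]                    # including third-party models and providers
    training_data_notes: str             # characteristics of fine-tuned/custom data
    performance_metrics: Dict[str, float]  # metric name -> value on representative test sets
    known_limitations: List[str]
    human_roles: List[str]               # who reviews, approves, can override
    data_flows: str                      # what enters, what leaves, where processed
    version_history: List[str] = field(default_factory=list)

record = AISystemRecord(
    intended_purpose="Rank inbound support tickets by urgency",
    models=["vendor-llm-v2 (third party)"],
    training_data_notes="No fine-tuning; prompt-based only",
    performance_metrics={"accuracy": 0.91},
    known_limitations=["Accuracy degrades on non-English tickets"],
    human_roles=["Support lead reviews all 'critical' rankings"],
    data_flows="Ticket text sent to vendor API (EU region); no PII retained",
)
print(asdict(record)["performance_metrics"]["accuracy"])
```

Storing this as structured data rather than a wiki page makes it diffable in version control, which directly supports the version-history requirement.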
Step 4: Implement Human Oversight
For high-risk systems (and good practice for any consequential AI):
- Override capability — A human can stop, pause, or reverse any AI decision that affects an individual significantly
- Confidence thresholds — Automated decisions only when confidence exceeds threshold; otherwise route to human review
- Explanation on request — Affected individuals can request a meaningful explanation of any AI-driven decision (GDPR Article 22 restricts solely automated decisions with significant effects; Articles 13–15 underpin the right to meaningful information about the logic involved)
- Audit trail — Log every significant decision with enough context to reconstruct what happened
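The confidence-threshold and audit-trail points above combine into one routing pattern. A minimal sketch in Python — the threshold value, field names, and in-memory log are illustrative; production systems need an append-only store and a threshold calibrated per use case:

```python
import json
import time

AUDIT_LOG = []                 # illustrative; in production use an append-only store
CONFIDENCE_THRESHOLD = 0.90    # illustrative; set per use case and risk tier

def route_decision(case_id: str, prediction: str, confidence: float) -> str:
    """Auto-apply high-confidence decisions; route the rest to human review.

    Every call is logged with enough context to reconstruct what happened.
    """
    outcome = "auto_applied" if confidence >= CONFIDENCE_THRESHOLD else "human_review"
    AUDIT_LOG.append(json.dumps({
        "timestamp": time.time(),
        "case_id": case_id,
        "prediction": prediction,
        "confidence": confidence,
        "threshold": CONFIDENCE_THRESHOLD,
        "outcome": outcome,
    }))
    return outcome

print(route_decision("case-001", "approve", 0.97))  # auto_applied
print(route_decision("case-002", "deny", 0.62))     # human_review
```

Logging the threshold alongside each decision matters: if the threshold changes over time, the audit trail still shows what rule was in force when a given decision was made.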
Step 5: Incident Response
- GDPR data breach: 72-hour notification to supervisory authority; notify affected individuals if high risk
- EU AI Act serious incident: Providers of high-risk systems must report serious incidents to national authorities; GPAI providers with systemic risk have separate reporting obligations
- US sector rules: SEC material AI incidents may require disclosure; FDA AI device malfunctions have reporting pathways
- Internal playbook: Define what constitutes an AI incident, who is notified, who decides on response, and how the system is taken offline if needed
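The internal playbook can be encoded as a small routing table so the on-call engineer is not interpreting policy under pressure. A sketch with purely illustrative severity levels and role names:

```python
# Severity -> (roles notified, whether takedown is pre-approved without escalation)
PLAYBOOK = {
    "low":      (["ai-product-owner"], False),
    "medium":   (["ai-product-owner", "compliance-lead"], False),
    "high":     (["ai-product-owner", "compliance-lead", "legal"], True),
    "critical": (["ai-product-owner", "compliance-lead", "legal", "ciso"], True),
}

def handle_incident(severity: str) -> dict:
    """Look up who to notify and whether the system may be taken offline immediately."""
    notify, pre_approved_takedown = PLAYBOOK[severity]
    return {"notify": notify, "pre_approved_takedown": pre_approved_takedown}

print(handle_incident("high"))
```

The point of pre-approving takedown authority for the top severities is speed: a 72-hour GDPR clock does not wait for an approvals chain.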
Step 6: High-Risk Conformity Assessment (EU, Aug 2026)
If your system is high-risk under the EU AI Act:
- Self-assessment against the technical requirements (most categories)
- Third-party assessment required for some biometric and law enforcement systems
- Issue a Declaration of Conformity
- Affix the CE marking to the system
- Register the system in the EU database for high-risk AI systems before placing it on the market
- Appoint an EU representative if you are based outside the EU
Step 7: GPAI Obligations (if applicable)
If you are releasing a general-purpose AI model (not just using one):
- Technical documentation published for integrators
- Copyright compliance policy publicly available
- Summary of training data sources published
- For systemic risk models (>10²⁵ FLOPs): adversarial testing, incident reporting, cybersecurity measures
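Whether a model crosses the 10²⁵ FLOP line can be estimated with the common heuristic that training compute is roughly 6 × parameters × training tokens. This is a back-of-envelope approximation, not a calculation method prescribed by the Act; the figures below are illustrative, not real systems:

```python
def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate using the ~6*N*D heuristic."""
    return 6.0 * params * tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # EU AI Act presumption of systemic risk for GPAI

small = estimated_training_flops(7e9, 2e12)    # 7B params, 2T tokens
large = estimated_training_flops(1e12, 15e12)  # 1T params, 15T tokens

print(f"{small:.2e}", small > SYSTEMIC_RISK_THRESHOLD)  # 8.40e+22 False
print(f"{large:.2e}", large > SYSTEMIC_RISK_THRESHOLD)  # 9.00e+25 True
```

If an estimate lands anywhere near the threshold, treat that as a trigger for a proper compute accounting, not as a conclusion.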
Ongoing Obligations
| Activity | Frequency | Who owns it |
|---|---|---|
| Post-market monitoring review | Quarterly (minimum) | AI product owner + compliance team |
| Model update impact assessment | On every model update | Engineering + compliance |
| Risk register review | Annual minimum; after incidents | Compliance lead |
| Bias audit | Annual (high-risk); every two years (others) | Data science + compliance |
| User disclosure updates | When AI system changes materially | Product + legal |
| Regulatory landscape review | Quarterly — landscape is fast-moving | Legal / compliance |
Using NIST AI RMF as a Safe Harbour
Texas TRAIGA and California SB-53 both offer a safe harbour or reduced liability where you can demonstrate implementation of the NIST AI Risk Management Framework. Practical steps:
- Map your existing governance against the GOVERN / MAP / MEASURE / MANAGE functions
- Document gaps and create a remediation plan
- Implement and document the four functions formally
- Store evidence of implementation — auditors will ask for it
Compliance Readiness Self-Check
- Have you classified your system into the correct EU AI Act tier?
- Have you identified all jurisdictions where your users are located?
- Do you have minimum documentation covering intended purpose, models, and limitations?
- Is there a human override mechanism for consequential AI decisions?
- Do you have a written incident response plan for AI failures?
- Have you implemented or mapped against the NIST AI RMF?
- Is there a scheduled review cycle for ongoing compliance obligations?