ISO 42001 — AI Management System Standard
ISO/IEC 42001:2023 is the first international standard specifically designed for AI management systems. Published in December 2023, it provides requirements for establishing, implementing, maintaining, and continually improving an AI Management System (AIMS) within an organisation. Like ISO 27001 (information security) and ISO 9001 (quality), ISO 42001 is certifiable: organisations can be audited by accredited third parties and receive a certificate confirming conformity with the standard.
What ISO 42001 Covers
ISO 42001 applies to any organisation that develops, provides, or uses AI — regardless of size, sector, or geography. It is intended to be used alongside ISO 27001 and other management system standards. The standard does not specify technical requirements for AI models (that is the domain of technical standards like ISO 23053 or IEEE 2857) — it specifies the organisational management system that governs AI.
What it covers
- Organisational context and AI objectives
- Leadership commitment and AI governance roles
- AI risk assessment and treatment process
- AI system lifecycle management (design, develop, deploy, monitor, retire)
- Data governance for AI systems
- Supplier and third-party AI obligations
- Incident management and continual improvement
What it does NOT cover
- Specific technical requirements for model accuracy
- Model architecture or training method requirements
- Sector-specific compliance (it is a horizontal standard)
- Specific bias thresholds or fairness metrics (left to context)
ISO 42001 Structure (Annex SL / High Level Structure)
ISO 42001 follows the harmonised structure defined in Annex SL of the ISO/IEC Directives (formerly known as the High Level Structure, HLS), the same skeleton used by ISO 27001 and ISO 9001, making it straightforward to integrate into an existing management system. Clauses 1-3 cover scope, references, and terms; the requirement clauses 4-10 follow a Plan-Do-Check-Act (PDCA) cycle:
| Clause | Content |
|---|---|
| 4 — Context | Understanding the organisation, stakeholders, and the scope of the AIMS |
| 5 — Leadership | Top management commitment, AI policy, roles and responsibilities |
| 6 — Planning | AI risk assessment and treatment, AI objectives and planning to achieve them |
| 7 — Support | Resources, competence, awareness, communication, documented information |
| 8 — Operation | Operational planning and control, AI system impact assessment, AI system lifecycle |
| 9 — Performance evaluation | Monitoring, measurement, analysis, evaluation, internal audit, management review |
| 10 — Improvement | Nonconformity and corrective action, continual improvement |
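Clause 6's risk assessment and treatment obligations are typically operationalised as an AI risk register. The standard does not prescribe a register format or scoring scheme, so everything below (the `AIRiskEntry` class, its fields, and the likelihood-times-impact score) is an illustrative sketch, not ISO 42001 wording:

```python
from dataclasses import dataclass, field

# Hypothetical AIMS risk register entry. ISO 42001 Clause 6 requires that AI
# risks be identified, assessed against defined criteria, and treated; it does
# not mandate this (or any) data structure.
@dataclass
class AIRiskEntry:
    risk_id: str
    description: str   # e.g. "training data not representative of users"
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (negligible) .. 5 (severe)
    treatment: str     # mitigate / accept / avoid / transfer
    owner: str
    controls: list = field(default_factory=list)  # Annex A control references

    def score(self) -> int:
        # Likelihood x impact scoring is a common convention, not a requirement
        return self.likelihood * self.impact

register = [
    AIRiskEntry("R-001", "Chatbot produces harmful advice", 3, 4,
                "mitigate", "AI Governance Lead", ["A.6", "A.9"]),
    AIRiskEntry("R-002", "Training data includes personal data without a legal basis",
                2, 5, "mitigate", "DPO", ["A.7"]),
]

# Entries above the organisation's own acceptance threshold get a documented
# treatment plan (the threshold of 10 here is an arbitrary example)
needs_plan = [r.risk_id for r in register if r.score() >= 10]
```

A register like this also feeds Clause 9: scored, owned entries give internal audit and management review something concrete to evaluate.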
AI-Specific Requirements in Clause 8 and Annex A
Clause 8 is where ISO 42001 diverges from generic management system standards. It requires operational planning and control (8.1), AI risk assessment (8.2), AI risk treatment (8.3), and an AI system impact assessment (8.4); the detailed AI-specific requirements sit in the Annex A controls that risk treatment selects from:
- AI System Impact Assessment (8.4): Before deployment, organisations must assess the potential impacts of the AI system on individuals, groups, and society, analogous to a DPIA (Data Protection Impact Assessment) but AI-focused.
- AI system life cycle (Annex A.6): Controls spanning design, development, testing, deployment, monitoring, and decommissioning, with documented inputs, activities, and outputs for each phase.
- Data for AI systems (Annex A.7): Data governance specific to AI, covering data quality, representativeness, provenance, and documentation of training and test data.
- Information for interested parties (Annex A.8): Documentation obligations for AI systems, including intended use, limitations, and user guidance.
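The standard requires the Clause 8.4 impact assessment to be performed and documented, but leaves its shape to the organisation. A minimal sketch, assuming a DPIA-like record (the class name, fields, and the deployment-gate rule are all hypothetical):

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical structure for an AI System Impact Assessment (Clause 8.4),
# loosely modelled on a DPIA. The fields are illustrative; ISO 42001 mandates
# the assessment, not its format.
@dataclass
class AIImpactAssessment:
    system_name: str
    intended_use: str
    affected_parties: list    # individuals, groups, society at large
    potential_impacts: dict   # party -> description of potential harm/benefit
    mitigations: list
    residual_risk: str        # e.g. "low" / "medium" / "high"
    approved_by: str
    approval_date: date

    def ready_for_deployment(self) -> bool:
        # Example gate: assessment approved and residual risk deemed acceptable
        return bool(self.approved_by) and self.residual_risk in ("low", "medium")

aisia = AIImpactAssessment(
    system_name="CV screening model",
    intended_use="Rank job applications for recruiter review",
    affected_parties=["job applicants", "recruiters"],
    potential_impacts={"job applicants": "unfair rejection due to biased ranking"},
    mitigations=["human review of all rejections", "periodic bias testing"],
    residual_risk="medium",
    approved_by="Head of AI Governance",
    approval_date=date(2024, 6, 1),
)
```

Gating deployment on an approved assessment with acceptable residual risk is one common design choice; the standard itself only mandates that the assessment be carried out and documented.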
ISO 42001 vs NIST AI RMF
| Dimension | ISO 42001 | NIST AI RMF |
|---|---|---|
| Nature | Certifiable standard (third-party audit) | Voluntary framework (no certification) |
| Origin | International (ISO/IEC) | US national (NIST) |
| Structure | High Level Structure (PDCA, management system) | Four functions (Govern, Map, Measure, Manage) |
| Prescriptiveness | More prescriptive (shall requirements) | More flexible (suggested actions, profiles) |
| Integration | Integrates with ISO 27001, ISO 9001 | Integrates with NIST CSF, SP 800-53 |
| Typical adopters | Enterprises seeking third-party validation; EU-regulated sectors | US federal agencies; US enterprises; global adopters |
The two are complementary, not competing. Many mature organisations adopt NIST AI RMF for internal risk management practice and ISO 42001 for external certification and supply chain assurance.
Certification Process
- Stage 1 audit (document review): Auditor reviews your documented AIMS — policy, risk assessment, procedures. Identifies gaps before Stage 2.
- Stage 2 audit (implementation audit): Auditor verifies that documented processes are actually implemented and effective. Interviews staff, reviews evidence.
- Certificate issued: Valid for 3 years with annual surveillance audits. Recertification audit at year 3.
- Nonconformities: Major nonconformity (certification blocked until resolved); minor nonconformity (resolved within agreed timeframe).
Checklist: Do You Understand This?
- What does ISO 42001 govern — what is its scope and what does it explicitly not cover?
- What is the High Level Structure and why does it make ISO 42001 easier to integrate with ISO 27001?
- What is an AI System Impact Assessment (Clause 8.4) and when must it be performed?
- Name three key differences between ISO 42001 and NIST AI RMF.
- What happens in Stage 1 vs Stage 2 of ISO 42001 certification?
- Why would an organisation pursue both NIST AI RMF adoption and ISO 42001 certification?