🧠 All Things AI
Advanced

NIST AI RMF — How to Apply It

The NIST AI Risk Management Framework (AI RMF 1.0, published January 2023) is the most widely adopted voluntary framework for managing AI risk in the United States. Unlike the EU AI Act, which is binding law, the AI RMF is voluntary guidance; even so, it is referenced by US federal agencies, cited in procurement requirements, and used by organisations worldwide as a structured approach to responsible AI. It complements the NIST Cybersecurity Framework (CSF) and can be mapped to ISO/IEC 42001.

Trustworthy AI: The Foundation

The AI RMF is organised around seven characteristics of trustworthy AI that an organisation should work toward. These characteristics drive the risk management activities across all four functions.

Valid & Reliable

Performs as intended across its intended deployment context; results are reproducible

Safe

Risks to people, systems, or the environment are understood and managed

Secure & Resilient

Withstands attack, misuse, and adverse conditions

Explainable & Interpretable

AI decisions and reasoning are understandable to relevant stakeholders

Privacy-Enhanced

Privacy values are incorporated throughout the AI lifecycle

Fair (Bias Managed)

Harmful bias is identified, assessed, and mitigated

Accountable & Transparent

Roles and responsibilities are clear; information is disclosed appropriately

The Four Core Functions

The AI RMF organises risk management activities into four functions that form a continuous, iterative cycle. They are not strictly sequential: GOVERN is cross-cutting and runs alongside the other three continuously.

1
GOVERN

Establish the organisational culture, policies, processes, and accountability structures for AI risk management, and assign roles and responsibilities. This function underpins all others: without governance, MAP, MEASURE, and MANAGE are applied inconsistently.

2
MAP

Identify and categorise risks associated with the AI system in its intended context. Understand the system's purpose, stakeholders, deployment environment, and the risks that matter most. Assess whether benefits justify risks.

3
MEASURE

Analyse, assess, and track identified AI risks using quantitative and qualitative methods. Develop metrics. Run evaluations. Measure model performance, bias, robustness, and drift across the AI lifecycle.

4
MANAGE

Prioritise, respond to, and monitor the risks identified and measured. Apply mitigations. Residual risks that cannot be eliminated must be documented and accepted by an accountable owner.

GOVERN in Practice

GOVERN is the least understood function but the most foundational. It answers: who is responsible, under what policies, with what processes, and how is AI risk integrated into enterprise risk management?

Key GOVERN activities

  • Establish an AI risk policy that covers acceptable use, prohibited use, and accountability
  • Define AI risk roles: AI risk owner, data steward, model owner, oversight committee
  • Create intake and review processes for new AI use cases (before development begins)
  • Integrate AI risk into existing ERM (Enterprise Risk Management) processes
  • Define training requirements for staff who develop, deploy, or use AI systems
  • Establish a feedback mechanism for surfacing AI-related concerns from any part of the organisation

MAP in Practice

MAP is about understanding the system in context before measuring risks. The most common failure is skipping MAP and moving directly to development, so that risks are discovered only after deployment.

Key MAP questions

  • Who are the direct users, indirect users, and those affected but not users (third parties)?
  • What is the intended use case — and what uses are out of scope or prohibited?
  • What is the deployment context: how autonomous, what human oversight, what stakes if wrong?
  • What are the relevant regulatory requirements (sector, geography, data type)?
  • What negative impacts could result from errors, misuse, or unexpected outputs?
  • Who are the marginalised or vulnerable populations that could be disproportionately harmed?
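One way to operationalise these questions is to record the answers in a structured context document and refuse to proceed while any field is empty. The sketch below assumes a hypothetical schema and example answers; nothing here is mandated by the framework itself.

```python
from dataclasses import dataclass

@dataclass
class MapContext:
    """Answers to the MAP questions, captured before development begins (hypothetical schema)."""
    direct_users: list[str]
    indirect_users: list[str]
    affected_third_parties: list[str]
    intended_use: str
    out_of_scope_uses: list[str]
    autonomy_level: str                 # e.g. "human-in-the-loop"
    regulatory_requirements: list[str]
    potential_harms: list[str]
    vulnerable_populations: list[str]

    def unanswered(self) -> list[str]:
        """MAP fields still empty; development should not start until this returns []."""
        return [name for name, value in vars(self).items() if not value]

ctx = MapContext(
    direct_users=["claims adjusters"],
    indirect_users=["call-centre staff"],
    affected_third_parties=["policyholders"],
    intended_use="Summarise claim documents for adjusters",
    out_of_scope_uses=["Automated claim denial"],
    autonomy_level="human-in-the-loop",
    regulatory_requirements=[],          # regulatory review still outstanding
    potential_harms=["an incorrect summary contributes to a wrongful denial"],
    vulnerable_populations=["non-native speakers"],
)
print(ctx.unanswered())  # → ['regulatory_requirements']
```

The `unanswered()` check makes the "before development begins" rule enforceable rather than aspirational: an empty field is a visible blocker, not a silent gap.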

MEASURE in Practice

MEASURE is the function most closely linked to engineering work. It asks: how do we know whether the system is performing within acceptable risk parameters?

Technical measurement

Accuracy, precision, recall on representative test sets; fairness metrics across demographic groups; robustness to distribution shift; adversarial robustness; latency and reliability SLOs
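As a small illustration of measuring both accuracy and a fairness metric from the same evaluation run, the sketch below computes overall accuracy and the demographic parity difference (the gap in positive-prediction rates between two groups) over hypothetical, made-up evaluation records.

```python
# Hypothetical evaluation records: (true_label, predicted_label, demographic_group)
records = [
    (1, 1, "A"), (0, 0, "A"), (1, 0, "A"), (0, 1, "A"),
    (1, 1, "B"), (0, 1, "B"), (1, 1, "B"), (0, 0, "B"),
]

def selection_rate(recs, group):
    """Fraction of positive predictions within one demographic group."""
    preds = [p for _, p, g in recs if g == group]
    return sum(preds) / len(preds)

# Overall accuracy across all groups
accuracy = sum(t == p for t, p, _ in records) / len(records)

# Demographic parity difference: |selection rate A - selection rate B|
dp_diff = abs(selection_rate(records, "A") - selection_rate(records, "B"))

print(f"accuracy={accuracy}, dp_diff={dp_diff}")  # accuracy=0.625, dp_diff=0.25
```

In practice a library such as Fairlearn or scikit-learn would compute these over real test sets, but the point stands: fairness metrics are sliced per group, while accuracy alone can hide a large per-group disparity.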

Sociotechnical measurement

User trust and satisfaction; adoption and override rates; downstream outcome measurement (did the AI-assisted decisions lead to better results?); incident and complaint rates

MANAGE in Practice

MANAGE output: the risk register

  • Each identified risk → likelihood, impact, current controls, required controls, owner, timeline
  • Residual risks accepted by an accountable risk owner (not just "noted")
  • Monitoring schedule and triggers for re-assessment
  • Incident response plan for AI-specific failure modes
  • Decommissioning criteria: what performance degradation or risk threshold requires pulling the system?

The AI RMF Playbook

The NIST AI RMF Playbook (published alongside the framework) suggests actions, references, and documentation guidance for each category and subcategory of the four functions. It is available at airc.nist.gov as an interactive resource. Organisations use the Playbook to build implementation roadmaps, selecting which actions apply to their context, maturity level, and AI system type.

Profiles: Customising for Your Context

A Profile is an organisation-specific prioritisation of AI RMF categories based on context, goals, and risk tolerance. Two types:

Current Profile

The AI risk management outcomes you are achieving today. An honest assessment of current maturity — not aspirational.

Target Profile

The outcomes you want to achieve given your risk tolerance, mission, and regulatory requirements. The gap between the current and target profiles drives the implementation roadmap.
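The gap analysis between the two profiles can be sketched with simple maturity scoring. The 0-4 scale and per-function scores below are hypothetical; the AI RMF does not prescribe a scoring scheme, only the current/target comparison.

```python
# Hypothetical maturity scores (0-4) per AI RMF function for one organisation
current_profile = {"GOVERN": 1, "MAP": 2, "MEASURE": 1, "MANAGE": 0}
target_profile  = {"GOVERN": 3, "MAP": 3, "MEASURE": 2, "MANAGE": 2}

# Gap per function: how far today's outcomes fall short of the target
gaps = {fn: target_profile[fn] - current_profile[fn] for fn in current_profile}

# Roadmap priority: close the largest gaps first (sort is stable for ties)
roadmap = sorted(gaps, key=gaps.get, reverse=True)
print(roadmap)  # → ['GOVERN', 'MANAGE', 'MAP', 'MEASURE']
```

Real profiles score at the category level rather than whole functions, but the mechanics are the same: an honest current score, an explicit target, and a roadmap ordered by the difference.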

Checklist: Do You Understand This?

  • Name the seven characteristics of trustworthy AI in the NIST AI RMF.
  • What are the four core functions and why does GOVERN underpin the other three?
  • What questions should the MAP function answer before development begins?
  • What is the difference between a current profile and a target profile?
  • What does MANAGE produce, and what must residual risk acceptance include?
  • How does the NIST AI RMF differ from the EU AI Act in terms of legal force and scope?