
US AI Policy & NIST Framework

Unlike the EU, the US has no single federal AI law. Instead, AI is governed by a patchwork of Executive Orders, agency guidance, sector-specific rules, and increasingly bold state laws. The central question for 2025–2026 is whether federal policy will preempt those state laws; as of early 2026 the issue is actively contested and unresolved.

Executive Orders: A Tale of Two Presidents

Biden EO on Safe AI (October 2023)

President Biden's Executive Order on Safe, Secure, and Trustworthy AI directed:

  • NIST to develop standards for AI safety testing
  • Developers of large dual-use foundation models to report safety test results to the government
  • Agency-by-agency AI deployment guidance across federal departments
  • DHS risk assessment for AI threats to critical infrastructure
  • Work toward standards for watermarking AI-generated content (a research direction, not a binding requirement)

The Biden EO was the first comprehensive federal executive action on AI in the US but was largely revoked by the incoming administration in January 2025.

Trump EOs on AI (January 2025 + December 2025)

On day one of the Trump administration (January 2025), an executive order revoked the Biden AI EO and reoriented federal AI policy around:

  • Maintaining US AI dominance — removing "excessive regulations" that could disadvantage US AI companies vs China
  • Pro-innovation posture — federal policy should not burden AI development without clear public benefit
  • Federal AI Action Plan — directed agencies to produce a coordinated US AI strategy

In December 2025, a second major EO — "Ensuring a National Policy Framework for Artificial Intelligence" — established the federal preemption strategy:

  • Directed the DOJ to establish an AI Litigation Task Force to challenge state AI laws inconsistent with federal policy
  • Directed the FTC to issue a policy statement on how the FTC Act applies to AI, including whether it could preempt state laws
  • Directed Commerce to evaluate state AI laws within 90 days
  • Directed the FCC to consider federal AI disclosure standards
  • Exempted from preemption: children's safety laws, AI data centre infrastructure laws, and state AI procurement laws

NIST AI Risk Management Framework (AI RMF)

The NIST AI RMF (released January 2023, with ongoing updates) is the dominant voluntary framework for US AI governance. It is widely adopted in regulated industries and referenced in state laws (Texas, California) as a safe harbour standard.

The framework organises AI risk management into four core functions:

GOVERN

Organisational policies, accountability structures, and culture for AI risk management. Who owns risk decisions? What policies govern AI use? How are lessons learned and fed back?

MAP

Identify and categorise AI risks in context. What harms could result from this AI system? Who is affected? What is the probability and severity?

MEASURE

Quantify and evaluate identified risks. Establish metrics, benchmarks, and evaluation protocols. Test for accuracy, bias, robustness, and security.

MANAGE

Treat identified and measured risks: mitigate, accept, transfer, or avoid. Monitor in production. Respond to incidents. Update policies as AI evolves.
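
In practice, many teams operationalise the four functions as a lightweight risk register: GOVERN supplies the owner and policy linkage, MAP the contextual harm description, MEASURE the metrics, and MANAGE the chosen treatment. The sketch below shows one such register entry in Python; the schema, field names, and scoring are illustrative assumptions, not a NIST-specified data model.

```python
from dataclasses import dataclass, field
from enum import Enum


class Treatment(Enum):
    # The four classic risk treatments referenced under MANAGE
    MITIGATE = "mitigate"
    ACCEPT = "accept"
    TRANSFER = "transfer"
    AVOID = "avoid"


@dataclass
class AIRiskEntry:
    """One row of a lightweight AI risk register, loosely organised
    along the four NIST AI RMF functions. Fields are illustrative,
    not a NIST-defined schema."""
    # GOVERN: accountability and policy linkage
    risk_owner: str
    governing_policy: str
    # MAP: the risk in its deployment context
    system: str
    harm_description: str
    affected_parties: list[str]
    # MEASURE: quantification of the identified risk
    likelihood: float                      # estimated probability, 0.0-1.0
    severity: int                          # e.g. 1 (low) to 5 (critical)
    metrics: dict[str, float] = field(default_factory=dict)
    # MANAGE: treatment, mitigation, and monitoring
    treatment: Treatment = Treatment.MITIGATE
    mitigations: list[str] = field(default_factory=list)

    def priority(self) -> float:
        """Simple likelihood x severity score for triage."""
        return self.likelihood * self.severity


# Example: a hiring-screening model assessed for disparate-impact risk
entry = AIRiskEntry(
    risk_owner="Head of People Analytics",
    governing_policy="AI Use Policy v2",
    system="resume-screening-model",
    harm_description="Disparate impact on protected groups in shortlisting",
    affected_parties=["job applicants"],
    likelihood=0.3,
    severity=4,
    metrics={"selection_rate_ratio": 0.78},
    mitigations=["annual bias audit", "human review of rejections"],
)
print(f"{entry.system}: priority={entry.priority():.2f}")
```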

Why NIST RMF matters in 2026

Texas RAGA and California laws offer a rebuttable presumption of compliance or safe harbour to companies that have implemented a recognised framework like NIST AI RMF or ISO 42001. Even as federal preemption is contested, having NIST RMF implementation documented is practical insurance.

Key State AI Laws (2025–2026)

  • SB-53 (California): signed Sep 2025, effective Jan 2026. Frontier AI labs must publish safety policies; incident reporting; AI safety research exemptions.
  • AB 2013 (California): signed 2024, effective Jan 2026. Training data disclosure requirements for AI developers.
  • AB 2885 (California): signed 2024. Standardised AI definition for California law.
  • RAGA, the Responsible AI Governance Act (Texas): enacted 2025, effective Jan 2026. High-risk AI obligations; impact assessments; NIST RMF safe harbour.
  • Colorado AI Act (Colorado): signed 2024, effective June 2026. Developer/deployer obligations for high-risk AI; bias risk management; disclosure to affected individuals.
  • Illinois AI Use in Employment Act (Illinois): effective Jan 2026. Employers must disclose AI use in hiring; bias audit requirements.

Federal vs State: The 2025–2026 Battle

The December 2025 Trump EO set up a direct conflict with state AI laws. The key legal question: can federal executive policy preempt state AI laws without Congressional action?

Key dynamics:

  • State laws currently remain in force — until courts rule otherwise or Congress passes preemptive federal legislation, state laws like California SB-53 and Texas RAGA are enforceable
  • DOJ AI Litigation Task Force — established to challenge state laws; first legal challenges expected mid-2026
  • No federal comprehensive AI law — Congress has not passed a federal AI Act equivalent; sector-specific rules fill the gap
  • Children's safety exception — Federal preemption explicitly does not extend to state children's AI safety laws (e.g. school AI restrictions)

Sector-Specific Federal AI Rules

In the absence of general AI legislation, sector regulators have issued AI-specific guidance:

  • FDA — AI in medical devices requires premarket clearance; Software as a Medical Device (SaMD) guidance; predetermined change control plans
  • SEC — Disclosure rules requiring companies to describe material AI risks in filings; enforcement actions against AI-washing
  • EEOC — Guidance on AI in employment decisions and disparate impact under Title VII
  • CFPB — Guidance on AI in consumer credit decisions; adverse action notice requirements even for algorithmic decisions
  • FTC — Enforcement against deceptive AI claims; "AI washing" fraud investigations; draft AI guidance

Practical Implications for Builders

  • Multi-state complexity is real: If you serve US customers, you may face California, Texas, Colorado, and Illinois requirements simultaneously in 2026, all with different scopes and definitions (see the sketch after this list).
  • Document compliance with NIST AI RMF: Multiple state safe harbours reference it; it demonstrates due diligence regardless of which laws ultimately prevail.
  • Watch the preemption cases closely: If federal preemption succeeds, the multi-state compliance burden collapses. If it fails, state laws multiply and harmonisation becomes harder.
  • Sector matters most: If you are in healthcare, finance, employment, or public services, sector-specific federal guidance is already binding and likely more relevant than general state AI laws.
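
As a rough illustration of the multi-state point above, the sketch below encodes a simplified view of the laws listed earlier and checks which ones are plausibly in scope for a given deployment. The scope labels, dates, and matching logic are assumptions for illustration only; real applicability turns on each statute's actual definitions and is not captured by a lookup like this.

```python
from datetime import date

# Illustrative, simplified summary of the state laws discussed above.
# Scope tags and dates are assumptions for the sketch, not legal advice.
STATE_AI_LAWS = {
    "SB-53 (CA)":            {"state": "CA", "effective": date(2026, 1, 1),  "scope": "frontier_developer"},
    "AB 2013 (CA)":          {"state": "CA", "effective": date(2026, 1, 1),  "scope": "developer"},
    "RAGA (TX)":             {"state": "TX", "effective": date(2026, 1, 1),  "scope": "high_risk"},
    "Colorado AI Act":       {"state": "CO", "effective": date(2026, 6, 30), "scope": "high_risk"},
    "IL AI Employment Act":  {"state": "IL", "effective": date(2026, 1, 1),  "scope": "employment"},
}


def applicable_laws(states_served: set[str], scopes: set[str], today: date) -> list[str]:
    """Return the laws that are in force and plausibly in scope.
    A real analysis would apply each statute's own definitions."""
    return [
        name for name, law in STATE_AI_LAWS.items()
        if law["state"] in states_served
        and law["scope"] in scopes
        and law["effective"] <= today
    ]


# Example: an employment-screening vendor serving CA, TX, and IL in mid-2026
print(applicable_laws({"CA", "TX", "IL"}, {"high_risk", "employment"}, date(2026, 7, 1)))
# -> ['RAGA (TX)', 'IL AI Employment Act']
```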

Checklist: Do You Understand This?

  • What did Biden's AI EO require, and why was it revoked?
  • What does Trump's December 2025 EO on AI preemption actually direct agencies to do?
  • What are the four functions of the NIST AI RMF?
  • Name three state AI laws that took effect in 2026 and their main provisions.
  • Why do California SB-53 and Texas RAGA reference NIST AI RMF?
  • Why do sector-specific rules (FDA, SEC, EEOC) matter more than general AI laws for many builders?