The NIST AI Risk Management Framework (AI RMF 1.0) provides a voluntary, rights-preserving, non-sector-specific, and use-case-agnostic approach to managing risks from AI. It is organized around four core functions (Govern, Map, Measure, and Manage) and is widely referenced by US federal agencies and enterprises as a governance baseline.
What does NIST AI RMF actually require?
Because the framework is voluntary, it defines outcomes rather than obligations. Its four core functions are: Govern (establish a culture of risk management), Map (identify context and categorize AI risks), Measure (assess, analyze, and track risks), and Manage (prioritize risks and act on them). NIST supplemented the framework with a Generative AI Profile (NIST AI 600-1) in July 2024.
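As an illustration only (the framework prescribes no tooling), the four core functions can be modeled as a simple checklist structure for tracking an organization's coverage. The class and function names below are hypothetical; the activity strings paraphrase the summaries above.

```python
# Hypothetical sketch: tracking coverage of the AI RMF core functions.
# Function names come from AI RMF 1.0; everything else is illustrative.
from dataclasses import dataclass


@dataclass
class FunctionStatus:
    activity: str          # what the function asks an organization to do
    complete: bool = False # whether the organization has addressed it

RMF_FUNCTIONS = {
    "Govern": FunctionStatus("Establish a culture of risk management"),
    "Map": FunctionStatus("Identify context and categorize AI risks"),
    "Measure": FunctionStatus("Assess, analyze, and track risks"),
    "Manage": FunctionStatus("Prioritize risks and act on them"),
}

def outstanding(funcs: dict[str, FunctionStatus]) -> list[str]:
    """Return the core functions not yet addressed."""
    return [name for name, status in funcs.items() if not status.complete]

RMF_FUNCTIONS["Map"].complete = True
print(outstanding(RMF_FUNCTIONS))  # → ['Govern', 'Measure', 'Manage']
```

A real gap assessment would work at the level of the framework's categories and subcategories rather than the four top-level functions, but the shape of the data is the same.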
Who is in scope of NIST AI RMF?
NIST AI RMF is voluntary in the US. Relevance in practice depends on jurisdiction and on the role an organization plays in the AI supply chain, such as developer, deployer, or evaluator. See /frameworks/nist-ai-rmf for the full scope note and source links.
When does NIST AI RMF take effect?
NIST released AI RMF 1.0 on January 26, 2023. Because the framework is voluntary, there is no enforcement date; the Generative AI Profile (NIST AI 600-1) followed in July 2024. See the framework brief for the full timeline.
What are the penalties?
None. As a voluntary framework, NIST AI RMF carries no statutory penalties and has no enforcement authority. Consequences arise only indirectly, for example where a contract, procurement requirement, or agency policy mandates conformance.
Which vendors help with NIST AI RMF compliance?
In our directory, the following vendors reference NIST AI RMF in their compliance coverage: Credo AI, Holistic AI, Fiddler AI, Arthur, Robust Intelligence, Monitaur, Trustible, FairNow, Fairly AI, Saidot, LatticeFlow AI, Lakera. Each profile links to the public source for the claim.