AI compliance frameworks
A guide to the regulations, laws, and voluntary standards shaping how organizations build, deploy, and monitor AI systems. Each framework page lists key obligations and the vendors that support them.
Colorado Artificial Intelligence Act (SB 24-205)
In force · US
Colorado SB 24-205 is the first comprehensive US state AI law, focused on preventing algorithmic discrimination in consequential decisions (employment, lending, housing, insurance, education, healthcare, legal services, and government services). Developers and deployers of high-risk AI systems must use reasonable care to protect consumers from algorithmic discrimination, and deployers must complete impact assessments.
EU Artificial Intelligence Act
In force · EU
The EU AI Act is the first comprehensive horizontal regulation of artificial intelligence globally. It takes a risk-based approach, classifying AI systems as unacceptable, high-risk, limited-risk, or minimal-risk, and imposes corresponding obligations on providers, deployers, importers, and distributors. High-risk system obligations include risk management, data governance, technical documentation, transparency, human oversight, and conformity assessment.
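The risk-based approach described above can be sketched as a simple lookup from a use case to its tier and headline obligations. This is an illustrative sketch only: the use-case labels and obligation summaries below are simplified assumptions, not a legal classification under the Act's annexes.

```python
# Hedged sketch of the EU AI Act's four risk tiers as a lookup table.
# Use-case labels and obligation summaries are illustrative assumptions.
RISK_TIERS = {
    "social_scoring": ("unacceptable", "prohibited practice"),
    "cv_screening": ("high-risk", "risk management, data governance, "
                                  "human oversight, conformity assessment"),
    "customer_chatbot": ("limited-risk", "transparency: disclose AI interaction"),
    "spam_filter": ("minimal-risk", "no mandatory obligations"),
}

def obligations(use_case: str) -> tuple[str, str]:
    """Return (risk tier, headline obligations) for a known use case."""
    return RISK_TIERS.get(use_case, ("unclassified", "assess case by case"))

tier, duties = obligations("cv_screening")  # recruitment tools are Annex III high-risk
```

In practice, classification turns on the Act's annexes and intended purpose, not a static table; a real compliance workflow would route "high-risk" results into the documentation and conformity-assessment obligations listed above.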
GDPR Article 22 — Automated Individual Decision-Making
In force · EU
GDPR Article 22 grants data subjects the right not to be subject to a decision based solely on automated processing, including profiling, that produces legal or similarly significant effects. Data controllers deploying AI for such decisions must implement suitable safeguards, including the data subject's rights to obtain human intervention, to express their point of view, and to contest the decision.
NYC Local Law 144 (Automated Employment Decision Tools)
In force · US
NYC Local Law 144 requires employers and employment agencies that use automated employment decision tools (AEDTs) to screen candidates or employees for hiring or promotion in NYC to commission an annual independent bias audit, publish a summary of the results, and give candidates advance notice.
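The bias audit centers on impact ratios: under the DCWP rules implementing the law, each category's selection rate is divided by the selection rate of the most-selected category. A minimal sketch, assuming hypothetical category names and counts:

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps category -> (selected, total applicants)."""
    return {cat: sel / tot for cat, (sel, tot) in outcomes.items()}

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Impact ratio: each category's selection rate over the highest rate."""
    top = max(rates.values())
    return {cat: r / top for cat, r in rates.items()}

# Hypothetical audit data: (selected, total) per demographic category.
outcomes = {"category_a": (30, 50), "category_b": (18, 40)}
rates = selection_rates(outcomes)   # category_a: 0.60, category_b: 0.45
ratios = impact_ratios(rates)       # category_a: 1.00, category_b: 0.75
```

A published audit summary reports these ratios (and, for scored tools, ratios of median-score pass rates); the law itself does not set a numeric threshold, though auditors often reference the EEOC's four-fifths rule as context.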
SEC AI-Related Disclosure Requirements
In force · US
The SEC has brought enforcement actions and issued guidance requiring public companies to accurately disclose their use of AI in securities filings, avoid "AI washing," and ensure investment advisers do not misrepresent their use of AI in client communications.
ISO/IEC 42001:2023 AI Management System
Voluntary standard · Global
ISO/IEC 42001 is the first certifiable international management system standard specifically for AI. It specifies requirements for establishing, implementing, maintaining, and continually improving an AI management system (AIMS). Procurement teams increasingly treat it as the AI equivalent of SOC 2: a signal that an organization has mature, auditable AI governance.
NIST AI Risk Management Framework
Voluntary standard · US
The NIST AI Risk Management Framework (AI RMF 1.0) provides a voluntary, rights-preserving, non-sector-specific, and use-case-agnostic approach to managing risks from AI. It is organized around four core functions — Govern, Map, Measure, and Manage — and is widely adopted by US federal agencies and enterprises as the de facto governance baseline.
UK AI Regulation Framework
Voluntary standard · UK
The UK's pro-innovation, context-specific approach to AI regulation relies on existing regulators (ICO, FCA, CMA, MHRA, Ofcom) applying five cross-sectoral principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. A central AI Safety Institute evaluates frontier models.