Obligations directory
AI compliance obligations
The concrete requirements underlying every AI framework — risk management systems, bias audits, impact assessments, documentation, oversight, transparency, incident reporting. Each obligation links to the vendors that help you meet it.
All obligations
9 obligations
AI Impact Assessment
Documented assessment of an AI system's intended use, risks, safeguards, and monitoring, completed before deployment and annually thereafter.
Bias Audit
Independent testing of an AI system for disparate impact across protected classes, with public summary of results.
Data & Data Governance
Controls on training, validation, and testing data — quality, representativeness, bias examination, and documentation.
Human Oversight
Meaningful human review of AI outputs, particularly for high-risk and consequential decisions.
Incident Reporting
Process for detecting, documenting, and reporting AI system malfunctions or algorithmic discrimination to regulators within defined timelines.
Post-Market Monitoring
Ongoing monitoring of AI system performance, drift, and incidents after deployment.
Risk Management System
A documented, iterative process to identify, analyze, evaluate, and mitigate risks from an AI system throughout its lifecycle.
Technical Documentation
Detailed documentation of a model's training data, architecture, performance metrics, limitations, and intended use — required for conformity assessment and audit.
Transparency & Notice to Individuals
Clear notice to individuals when AI is used for consequential decisions and meaningful information about the logic involved.
How obligations connect to frameworks
Frameworks like the EU AI Act, NIST AI RMF, and ISO/IEC 42001 overlap substantially on what they require. Indexing by obligation — rather than only by framework — makes it easier to see which vendor capabilities map to which concrete deliverables.
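The two views are the same data indexed in opposite directions. A minimal sketch of that inversion, using illustrative mappings only (which obligations a given framework actually imposes is a legal question this example does not settle):

```python
# Illustrative obligation-to-framework index. The specific pairings
# below are examples for demonstration, not an authoritative crosswalk.
OBLIGATION_FRAMEWORKS = {
    "Risk Management System": ["EU AI Act", "NIST AI RMF", "ISO/IEC 42001"],
    "Technical Documentation": ["EU AI Act", "ISO/IEC 42001"],
    "Post-Market Monitoring": ["EU AI Act"],
}

def by_framework(obligation_map):
    """Invert the obligation-side index into the framework-side view."""
    inverted = {}
    for obligation, frameworks in obligation_map.items():
        for framework in frameworks:
            inverted.setdefault(framework, []).append(obligation)
    return inverted
```

Because the inversion is lossless, a directory only needs to maintain one side of the mapping; `by_framework(OBLIGATION_FRAMEWORKS)` yields, for example, every obligation listed under "EU AI Act".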
See the frameworks directory for the regulation-side view.