Free tool
AI Risk Register
An editable risk register pre-populated with 14 vetted AI-specific risks. Each row is mapped to a framework reference (NIST AI RMF, ISO/IEC 42001, OWASP LLM Top 10, or EU AI Act Article 9) and a default control. Export to XLSX with a summary sheet, CSV, or markdown.
Start with 14 vetted risks
The starter set covers bias, privacy, security (OWASP LLM Top 10), accuracy, transparency, oversight, supply chain, legal, operational, and reputational categories — each pre-mapped to a framework reference and a default control. You can edit, delete, or add rows from there.
What an AI risk register is, and isn't
A risk register is the operational artefact that bridges policy and engineering. Policy says "we will manage risk"; engineering builds, ships, and patches systems; the register is where the two meet. Each row identifies a specific risk, the framework provision that recognises it, the control intended to mitigate it, who owns the control, and how the risk is being tracked. NIST's MEASURE function (in AI RMF) and ISO/IEC 42001's Annex A controls both assume that this register exists; without one, claiming compliance with either is largely aspirational.
What it is not: a substitute for security testing, a substitute for legal review, or a way to transfer risk away from the AI system's deployer. Logging a risk in a register and then never reviewing it is worse than not logging it at all, because it creates a paper trail showing the deployer was aware.
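The row anatomy described above can be sketched as a record. This is an illustrative shape only, not the tool's actual data model; the field names are assumptions mirroring the prose.

```python
from dataclasses import dataclass

@dataclass
class RiskRow:
    # Field names are hypothetical, mirroring the row anatomy described above.
    risk: str           # the specific risk being tracked
    framework_ref: str  # the provision that recognises it, e.g. "OWASP LLM01"
    control: str        # the control intended to mitigate the risk
    owner: str          # who owns the control
    status: str         # how the risk is being tracked, e.g. "monitoring"

row = RiskRow(
    risk="Prompt injection",
    framework_ref="OWASP LLM01",
    control="Input filtering and output validation",
    owner="AppSec",
    status="monitoring",
)
```

Every field must be populated for the row to do its job: a risk with no owner or no tracking status is exactly the "logged and never reviewed" failure mode described above.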
How the starter risks were chosen
The 14 pre-filled risks are drawn from four sources:
- NIST AI Risk Management Framework 1.0 — the four core functions (GOVERN, MAP, MEASURE, MANAGE) and the AI RMF Playbook, particularly MAP 2 (categorisation of risks) and MEASURE 2 (assessment).
- ISO/IEC 42001:2023 — Annex A operational controls, particularly A.6 (AI system lifecycle), A.7 (data for AI systems), and A.10 (third-party relationships).
- OWASP Top 10 for LLM Applications v2025 — LLM01 (prompt injection), LLM02 (sensitive information disclosure), LLM03 (supply chain), LLM04 (data and model poisoning), and the rest of the list. These are the security risks specific to large language models, and traditional security frameworks do not fully capture them.
- EU AI Act Article 9 — the provider's risk management system requirement, which mandates identification, evaluation, mitigation, and continuous monitoring of risks throughout the AI system's lifecycle.
Each starter row references the most specific control or article we could attach it to. You will almost certainly need to add organisation-specific risks that the starter set does not cover — domain risks (clinical, financial, employment), jurisdiction risks (Colorado AI Act, NYC Local Law 144), and product-specific risks (your particular threat model).
Likelihood and severity scales
Likelihood and severity each use a 1–5 scale because that is what most enterprise risk teams already use. The risk score (likelihood × severity) lands between 1 and 25; the heatmap colours scores 1–4 as low, 5–9 as medium, 10–15 as high, and 16–25 as critical. None of those bands are mandated by any framework — they are conventions. If your enterprise risk taxonomy already defines bands, change the colour mapping in your local fork (or change the bands in the export).
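The scoring and banding convention above is small enough to state as code — a minimal sketch, assuming the default bands described here rather than a framework-mandated mapping:

```python
def risk_band(likelihood: int, severity: int) -> str:
    """Map 1-5 likelihood and 1-5 severity to the default heatmap band."""
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("likelihood and severity must each be 1-5")
    score = likelihood * severity  # lands between 1 and 25
    if score <= 4:
        return "low"
    if score <= 9:
        return "medium"
    if score <= 15:
        return "high"
    return "critical"
```

If your risk taxonomy defines different bands, the thresholds are the only lines that change.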
XLSX export structure
The downloaded workbook contains two sheets: Risk register with one row per risk and the columns visible on screen, and Summary with the organisation, system name, generation date, total risk count, and a list of frameworks referenced with their canonical URLs. This is the structure most auditors expect; if your auditor wants a different layout, export and reformat in Excel or Google Sheets.
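For reformatting or re-generating an export programmatically, the flat CSV variant is the simplest target. The sketch below uses only the standard library; the column names are hypothetical, not the tool's actual export layout:

```python
import csv
import io

# Hypothetical column layout; the tool's actual CSV export may differ.
COLUMNS = ["Risk", "Category", "Framework reference", "Default control",
           "Owner", "Likelihood", "Severity", "Score"]

def export_csv(rows: list[dict]) -> str:
    """Serialise register rows to CSV text, computing the score per row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    for row in rows:
        row = dict(row)  # avoid mutating the caller's data
        row["Score"] = row["Likelihood"] * row["Severity"]
        writer.writerow(row)
    return buf.getvalue()

register = [{
    "Risk": "Prompt injection",
    "Category": "Security",
    "Framework reference": "OWASP LLM01",
    "Default control": "Input filtering and output validation",
    "Owner": "AppSec",
    "Likelihood": 4,
    "Severity": 4,
}]
text = export_csv(register)
```

A one-row-per-risk flat file like this round-trips cleanly through Excel or Google Sheets, which is where most auditor-specific reformatting happens anyway.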
No data leaves your browser
The register lives entirely in browser memory. We do not persist it to a database, and we do not transmit it anywhere. Closing the tab discards the register. If you want it back, download the .xlsx — that file is yours to manage.
Pair with
A risk register on its own does not satisfy the EU AI Act. For high-risk Annex III systems, pair this register with an EU AI Act classification record and a fundamental rights impact assessment. When you need software to operationalise the register at scale, the AI Compliance Vendors matchmaker ranks vendors by documented coverage of the frameworks the register references.
Sources
- NIST AI Risk Management Framework 1.0 — nist.gov
- ISO/IEC 42001:2023 (AI management systems) — iso.org
- OWASP Top 10 for LLM Applications v2025 — genai.owasp.org
- EU AI Act Article 9 (risk management) — eur-lex.europa.eu
- MITRE ATLAS (adversarial threat matrix for AI) — atlas.mitre.org