AI Impact Assessment Template: Free Download (NIST, ISO 42001, EU AI Act)
A free, ready-to-use AI Impact Assessment (AIIA) template mapped to NIST AI RMF, ISO/IEC 42001, EU AI Act Article 27, and the Colorado AI Act. Download the branded 13-page PDF and adapt it to your deployments.
By ACV Editorial · April 24, 2026 · 11 min read · Last reviewed April 24, 2026
Regulators, auditors, and customer security teams increasingly expect a documented AI Impact Assessment — the AI-system equivalent of the Data Protection Impact Assessment familiar from GDPR Article 35. Whether the trigger is EU AI Act Article 27, the Colorado AI Act's duty to complete an impact assessment "at least annually," ISO/IEC 42001's AI management system controls, or NIST AI RMF's MAP–MEASURE–MANAGE loop, the underlying artifact is the same: a structured record that identifies the AI system, its intended purpose, its risks, its mitigations, and the approval chain.
We've packaged that artifact as a free 13-page PDF template you can start using immediately.
<p style="margin:1.5rem 0; padding:1.25rem 1.5rem; border:1px solid var(--color-border); border-radius:0.875rem; background:var(--color-surface);"> <strong>Download:</strong> <a href="/downloads/AI-Impact-Assessment-Template.pdf">AI Impact Assessment Template (PDF, 13 pages)</a><br/> <span style="color:var(--color-text-muted); font-size:0.875rem;">Branded, print-ready, reviewer-tested. No email gate. Version 1.0 · Reviewed 2026-04-24.</span> </p>
This post explains (1) what an AI Impact Assessment is and when it's required, (2) what's inside the template and how each section maps to a specific regulatory requirement, (3) how to complete one without making it a bureaucratic exercise, and (4) how to evidence it during an audit.
What Is an AI Impact Assessment?
An AI Impact Assessment (variously AIA, AIIA, FRIA, or Algorithmic Impact Assessment, depending on the regime) is a documented analysis of the foreseeable harms an AI system may cause to individuals, groups, or society, paired with the technical and organisational measures taken to mitigate those harms. It is produced before the system is used in production decisions, and reviewed periodically thereafter.
The concept has explicit statutory and standards-based origins:
- The NIST AI Risk Management Framework formalised a four-function lifecycle (GOVERN, MAP, MEASURE, MANAGE) in January 2023, with a GenAI profile (NIST-AI-600-1) added in July 2024.
- ISO/IEC 42001:2023 — the first certifiable AI management system standard — requires organisations to "perform an AI system impact assessment" (Control A.5.2) when the system has the potential for adverse impact.
- ISO/IEC 23894:2023 gives the detailed risk-management guidance for ISO/IEC 42001 implementations.
- EU AI Act Article 27 requires a Fundamental Rights Impact Assessment (FRIA) before certain public bodies and private entities deploy high-risk AI systems. The Article 27 obligation applies from 2 August 2026.
- Colorado Senate Bill 24-205 ("Colorado AI Act") requires deployers of high-risk AI systems to "complete an impact assessment … at least annually" starting 30 June 2026 (effective date revised by SB 25B-004 signed 28 August 2025).
- NYC Local Law 144 requires a bias audit for automated employment decision tools and public summaries of audit results.
In every regime, the impact assessment is the primary artifact an auditor or enforcement agency asks for. A template that maps cleanly to more than one regime saves enormous time.
When an AIIA Is Required
| Trigger | Who must complete it | Cadence |
|---|---|---|
| EU AI Act Art. 27 | Public-sector deployers and private deployers of high-risk AI in specific sectors | Before first deployment; updated on material change |
| Colorado AI Act | Deployers of high-risk AI systems | At least annually and within 90 days of a material modification ([Colo. Rev. Stat. § 6-1-1703(3)]) |
| ISO/IEC 42001 Control A.5.2 | Any organisation seeking certification that operates AI systems with adverse-impact potential | On initial deployment and at planned intervals |
| NIST AI RMF MAP 1.1–5.2 | Voluntary framework; widely adopted and cited as safe harbor under TRAIGA | Lifecycle-integrated; revisited at each lifecycle transition |
| Texas TRAIGA | Not required by statute, but documentation consistent with NIST AI RMF is an affirmative defense ([HB 149 § 552.105(e)(2)(D)]) | Recommended pre-deployment |
| NYC Local Law 144 | Employers using AEDTs for hiring/promotion decisions | Annual independent bias audit; public summary |
| Illinois HB 3773 | Employers using AI for recruiting/hiring decisions (effective 1 Jan 2026) | Best practice alongside required candidate notice |
In practice, most mid-size and larger organisations should run an AIIA on every production AI system, not just the regulated ones: it doubles as the engineering design review, the security threat model, and the backing material for customer security questionnaires.
What's Inside the Template
The AI Impact Assessment Template is a 13-page PDF organised into nine sections plus appendices. Each section maps directly to a NIST AI RMF subcategory, an ISO/IEC 42001 control, or an EU AI Act requirement.
1. System Identification
Name, version, lifecycle stage, owner, date. Maps to NIST AI RMF MAP 1.1 (Intended purposes, potentially beneficial uses, context-specific laws, norms and expectations, and prospective settings of AI system use are understood and documented) and ISO/IEC 42001 A.6.2.3 (Documentation of AI system).
2. Intended Purpose and Use Context
What the system does, the user population, the geographic scope, the integration points. This is the single most important section — most post-deployment harm comes from off-label use, not faulty model behavior. Writing the intended-purpose statement tightly makes every subsequent control defensible.
3. Stakeholders and Affected Groups
Deployer, developer, end users, subjects of automated decisions, regulators, third parties. Demographic breakdown of subjects where applicable. Maps to NIST AI RMF MAP 1.2 and EU AI Act Article 27(1)(a) — FRIA requires "a description of the deployer's processes in which the high-risk AI system will be used in line with its intended purpose" and "a description of the categories of natural persons and groups likely to be affected."
4. Data Inventory
Training data provenance, licensing status, sensitive categories, retention, quality controls, synthetic-data disclosure. Links this AIIA to the DPIA if the system processes personal data under GDPR.
5. Risk Identification
Harm categories: bias and discrimination, privacy violations, security attacks (prompt injection, data poisoning, model extraction), safety failures, intellectual-property exposure, environmental impact, socio-economic displacement. Each risk gets a likelihood × severity score plus an owner.
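The likelihood × severity scoring in Section 5 can be kept machine-readable so risk registers stay sortable and auditable. A minimal sketch, assuming an illustrative 1–5 scale and banding thresholds that are not prescribed by the template:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One Section 5 risk row: category, 1-5 scores, and a named owner."""
    category: str       # e.g. "bias and discrimination", "prompt injection"
    likelihood: int     # 1 (rare) .. 5 (almost certain)
    severity: int       # 1 (negligible) .. 5 (critical)
    owner: str

    @property
    def score(self) -> int:
        # Likelihood x severity, as described in Section 5.
        return self.likelihood * self.severity

    @property
    def band(self) -> str:
        # Illustrative banding thresholds; tune to your risk appetite.
        if self.score >= 15:
            return "high"
        if self.score >= 8:
            return "medium"
        return "low"

risks = [
    Risk("bias and discrimination", likelihood=3, severity=4, owner="ml-eng"),
    Risk("prompt injection", likelihood=4, severity=3, owner="security"),
    Risk("model extraction", likelihood=2, severity=2, owner="security"),
]
for r in risks:
    print(f"{r.category}: {r.score} ({r.band}) -> {r.owner}")
```

Keeping the scores structured like this also makes it trivial to flag every "high" band row for the Section 7 residual-risk sign-off.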
6. Risk Mitigation Plan
For each identified risk: the control (technical, organisational, procedural), the owner, the due date, the evidence artifact. Maps to NIST AI RMF MANAGE 1.2 and ISO/IEC 42001 Clause 6.1.3.
7. Residual Risk Sign-Off
The accountable executive's explicit acceptance of whatever risk remains after mitigations. This is the line that matters in enforcement — without it, the AIIA is just a document; with it, it's a governance artifact.
8. Monitoring and Review Plan
KPIs, drift-detection metrics, incident-response triggers, scheduled review date. Maps to NIST AI RMF MEASURE 4 and EU AI Act Article 72 (post-market monitoring).
9. Change Log
Version history of the assessment itself. Required to demonstrate the "at least annually" and "material modification" cadences under the Colorado AI Act and EU AI Act.
Appendix A: Regulatory Crosswalk
A one-page table showing which AIIA section satisfies which obligation across NIST, ISO, EU AI Act, and Colorado. Use this page when responding to auditor or customer requests — it demonstrates the mapping without rework.
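A machine-readable version of the crosswalk makes auditor responses queryable rather than copy-pasted. A minimal sketch using only the section-to-obligation mappings stated in this post (the full Appendix A table would extend the dictionary):

```python
# Partial crosswalk: AIIA section -> obligations it satisfies.
# Entries below mirror the mappings stated in this post; the PDF's
# Appendix A carries the complete table.
CROSSWALK = {
    "1. System Identification":      ["NIST AI RMF MAP 1.1", "ISO/IEC 42001 A.6.2.3"],
    "3. Stakeholders":               ["NIST AI RMF MAP 1.2", "EU AI Act Art. 27(1)(a)"],
    "6. Risk Mitigation Plan":       ["NIST AI RMF MANAGE 1.2", "ISO/IEC 42001 6.1.3"],
    "8. Monitoring and Review Plan": ["NIST AI RMF MEASURE 4", "EU AI Act post-market monitoring"],
}

def sections_for(obligation: str) -> list[str]:
    """Which AIIA sections satisfy a given obligation substring?"""
    return [section for section, obligations in CROSSWALK.items()
            if any(obligation in o for o in obligations)]
```

Given an auditor request like "show me your MANAGE 1.2 evidence", `sections_for("MANAGE 1.2")` points straight at the relevant section of the assessment.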
Appendix B: Worked Example
A fully completed AIIA for a fictional but realistic scenario (a resume-screening AI at a mid-size employer under Illinois HB 3773 and NYC Local Law 144) so reviewers can see what "good" looks like.
How to Use the Template
- Start with a system you already have in production. Do not start with a hypothetical. Running an AIIA on a real system exposes the missing controls you actually need to build.
- Invite the three roles in one room. The AIIA cannot be completed by one person. It needs (a) the engineering owner who knows what the model does, (b) the legal/privacy/compliance owner who knows which regimes apply, and (c) the business owner who owns the residual risk sign-off. Thirty minutes of these three around a table beats five rounds of async comments.
- Write the intended-purpose statement tightly. The narrower the intended purpose, the easier every downstream section becomes. "Draft a reply suggestion shown only to a support agent, who then edits before sending" is a vastly easier system to govern than "customer service assistant."
- Score risks honestly. If your organisation has a red-team function, bring them into Section 5. The worst AIIAs are the ones that score every risk "low." The second worst are the ones that score every risk "high." A good AIIA reads like the output of a thoughtful argument.
- Pick owners for residual risk, not just controls. Every row in Section 6 has a control owner. Section 7 has exactly one accountable executive. That's the person the regulator will ask for first.
- Review quarterly, not just annually. The "at least annually" language in the Colorado AI Act is a floor. In most organisations running modern AI systems, material model or data changes land at least quarterly.
Relationship to Other Assessments
Teams frequently ask whether an AIIA replaces or supplements their existing assessments. The short answer:
- DPIA (GDPR Art. 35) / PIA: The AIIA supplements the DPIA for personal data processing. The DPIA answers "are we lawfully processing personal data?"; the AIIA answers "is the model itself safe, fair, and fit for purpose?" A single AI system typically needs both.
- SOC 2 / ISO 27001: Security certifications cover the environment the AI system runs in; the AIIA covers the AI system itself. They are orthogonal. An ISO/IEC 42001 certification — which explicitly requires an AI impact assessment — sits on top of ISO 27001, not instead of it.
- Fundamental Rights Impact Assessment (FRIA): FRIA is the EU AI Act's named variant. Our template has been reviewed against the Commission's FRIA template and aligns with Article 27(1) elements (a)–(f); the Commission's official FRIA template is expected to be published by the AI Office, at which point we will release a cross-walked addendum.
- Algorithmic Impact Assessment (AIA — Canadian): The Canadian Treasury Board Directive on Automated Decision-Making uses the term "Algorithmic Impact Assessment" with a distinct scoring tool. Our template captures the underlying content but does not reproduce the Canadian AIA numeric score.
Evidencing the AIIA in Audits
Every AIIA section in this template is designed to produce an evidence link — a URL, a ticket ID, a change-management record, or a file hash. When an auditor requests proof, you do not hand them the PDF; you hand them the AIIA plus the referenced evidence. The richer the evidence trail, the faster the audit.
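One of the cheapest evidence links mentioned above is a file hash: pinning each cited artifact to a digest lets an auditor verify that the file they receive is the file the assessment referenced. A minimal sketch using Python's standard library (file paths are hypothetical):

```python
import hashlib
from pathlib import Path

def evidence_hash(path: Path, chunk_size: int = 65536) -> str:
    """SHA-256 digest of an evidence artifact, read in chunks so large
    files (model cards, eval reports, training-data manifests) don't
    need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the result as "sha256:<digest>" next to the evidence link in the
# AIIA, e.g. alongside a Section 6 mitigation row.
```

Recomputing the digest at each scheduled review also doubles as a tamper check on the evidence trail itself.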
For ISO/IEC 42001 certification specifically, the AIIA is the primary artifact the auditor will inspect for Control A.5.2. Keep the change log up to date, and ensure Section 7 sign-off is timestamped and tied to a named executive.
Licensing
The template is released under a permissive licence: you may use it internally, modify it, and redistribute modified versions. We ask that you do not remove the "Reviewed by ACV Editorial" footer if you publish the template unmodified externally. There is no email gate, no tracking pixel, and no login wall.
Download the AI Impact Assessment Template (PDF) →
Further Reading
- The Texas AI Act (TRAIGA): Complete Compliance Guide for 1 January 2026 — why NIST AI RMF (and an AIIA mapped to it) is an affirmative defense under Texas law
- NIST AI RMF vs ISO/IEC 42001 — choosing a primary framework before running your first AIIA
- State AI law tracker — the 20+ US state AI laws that trigger impact-assessment duties
- Colorado AI Act framework page — full obligations under SB 24-205
- ISO/IEC 42001 framework page — the AI management system standard
Sources
- NIST AI Risk Management Framework (AI RMF 1.0) — https://www.nist.gov/itl/ai-risk-management-framework
- NIST AI 600-1 — Generative AI Profile (July 2024) — https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf
- ISO/IEC 42001:2023 — AI Management System — https://www.iso.org/standard/42001
- ISO/IEC 23894:2023 — AI Risk Management Guidance — https://www.iso.org/standard/77304.html
- EU AI Act Article 27 (Fundamental Rights Impact Assessment) — https://artificialintelligenceact.eu/article/27/
- Colorado SB 24-205 — Consumer Protections for Interactions with AI Systems — https://leg.colorado.gov/sites/default/files/2024a_205_signed.pdf
- Colorado SB 25B-004 (effective-date amendment) — https://leg.colorado.gov/bills/sb25b-004
- Texas HB 149 (TRAIGA) full enrolled text — https://capitol.texas.gov/tlodocs/89R/billtext/html/HB00149F.htm
- GDPR Article 35 — Data Protection Impact Assessment — https://gdpr-info.eu/art-35-gdpr/
- NYC Local Law 144 — Automated Employment Decision Tools — https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page
- Illinois HB 3773 — Employment of AI — https://www.ilga.gov/legislation/publicacts/103/103-0804.htm
- Canadian Treasury Board Directive on Automated Decision-Making — https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592
- European Commission — AI Office & EU AI Act implementation — https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai