AI Impact Assessment Template: The Complete How-To Guide (2026)
A practical template and step-by-step methodology for AI Impact Assessments aligned with the EU AI Act (Article 27 FRIA), Colorado AI Act §6-1-1703, NYC Local Law 144, NIST AI RMF, and ISO/IEC 42001 Annex A.5.2. Includes scope, risk taxonomy, stakeholder engagement, controls mapping, monitoring, and sign-off.
By AI Compliance Vendors Editorial · Published April 25, 2026 · Last verified April 25, 2026
An AI Impact Assessment (AIIA) is no longer a voluntary best-practice exercise. Colorado, the EU, New York City, and Canada have all turned it into a legal requirement, and ISO 42001 has embedded it in the international standard for AI management systems. Yet most organizations still approach the AIIA as a paperwork checkbox rather than a structured risk-reduction process.
This guide gives compliance professionals, legal teams, and AI product owners everything they need: the regulatory landscape, a framework comparison, a role-by-role responsibility map, a 12-section content blueprint, a risk-scoring methodology, and—in Section 8—a full sample template you can copy directly into a Word document and adapt for your first assessment.
1. What an AI Impact Assessment Is and What Regulations Require It
An AI Impact Assessment is a documented evaluation of the potential harms, biases, and rights implications of an AI system before it is deployed, and on a recurring basis thereafter. It answers four core questions: What does this system do? Who might be harmed by it and how? How likely and severe is that harm? What controls reduce it?
The term "AI impact assessment" is used loosely across jurisdictions. Depending on the regulatory context you are working in, the same concept appears under different names—Fundamental Rights Impact Assessment (FRIA), Algorithmic Impact Assessment (AIA), or simply "impact assessment." The table below maps each regime to its specific AIIA obligation.
| Regulation | Obligation | Who Must Complete It | Trigger | Retention / Notification |
|---|---|---|---|---|
| Colorado SB 24-205, § 6-1-1703 | Impact assessment covering purpose, algorithmic discrimination risk, data categories, performance metrics, transparency measures, and post-deployment safeguards | Deployers of high-risk AI systems | At deployment; annually; within 90 days of any substantial modification | 3 years; produce to AG within 90 days of request |
| EU AI Act, Article 27 | Fundamental Rights Impact Assessment (FRIA) covering processes, affected populations, specific rights risks, human oversight, and incident response | Deployers that are public-law bodies or provide public services; deployers of Annex III point 5(b)/(c) systems | Prior to first deployment | Notify market surveillance authority; submit AI Office questionnaire template |
| NYC Local Law 144 / DCWP Rule | Independent bias audit of automated employment decision tools (AEDTs) calculating impact ratios for sex, race/ethnicity, and intersectional categories | Employers and employment agencies using AEDTs for NYC residents | Within 1 year before use; annually thereafter | Summary of results publicly posted on website; candidate notice 10 business days before use |
| Canada AIDA / Directive on Automated Decision-Making | Algorithmic Impact Assessment (AIA): 65 risk questions + 41 mitigation questions yielding an Impact Level I–IV | Federal government departments using automated decision systems | At design phase; updated before production release; reviewed on schedule | Published on Open Government Portal in both official languages |
| ISO/IEC 42001, Clause 6.1.4 + Annex A.5 | AI system impact assessment as part of the AI Management System (AIMS); identify, analyse, evaluate, and treat impacts on individuals and society | Organizations seeking certification or conforming to the standard | During design and development; reassessed on material system changes | Documented in AIMS; available to interested parties as appropriate |
Colorado SB 24-205 in Depth
Colorado SB 24-205, signed May 17, 2024 and effective June 30, 2026, is the first comprehensive state AI law in the United States. Section 6-1-1703 requires deployers of "high-risk AI systems"—defined as systems that make or are a substantial factor in making a "consequential decision" in employment, lending, housing, healthcare, insurance, education, government services, or legal services—to complete an impact assessment before deployment and at least annually afterward. Assessments must be retained for three years and produced to the Colorado Attorney General within 90 days of request. Small deployers (fewer than 50 FTE who do not train the system on their own data) are exempt from the impact assessment requirement if they make any developer-provided assessment available to affected consumers.
EU AI Act Article 27 in Depth
Article 27 of the EU AI Act requires specific deployers—public-law bodies and entities providing public services—to conduct a Fundamental Rights Impact Assessment before deploying high-risk AI systems listed in Annex III (excluding systems under Annex III point 2, which covers critical infrastructure such as road traffic management). The FRIA must describe deployment processes, frequency of use, categories of affected persons, specific rights harms, human oversight measures, and remediation arrangements. Results must be notified to the relevant market surveillance authority. Article 27(4) clarifies that an existing DPIA conducted under GDPR Article 35 can be complemented by—rather than replaced by—the FRIA, allowing organizations to produce a single integrated document.
Canada's AIA Tool
The Government of Canada's Algorithmic Impact Assessment is a mandatory tool under the Treasury Board's Directive on Automated Decision-Making. It is a 106-question instrument (65 risk, 41 mitigation) that produces an Impact Level (I–IV) based on weighted scores across project, system, algorithm, decision, impact, and data domains. Level IV systems (scoring 76–100%) carry the heaviest mitigation requirements, including algorithmic explainability, peer review by a third-party expert, and a formal recourse mechanism. The AIA is open-source and freely available to private organizations as a reference model.
ISO/IEC 42001 Annex A
ISO/IEC 42001's Clause 6.1.4 and Annex A control A.5 require organizations to define and execute a structured AI system impact assessment process that identifies impacts on individuals, groups, and society; rates severity and likelihood; selects mitigation controls from Annex A; and records residual risks. The companion standard ISO/IEC 42005:2025 provides in-depth guidance on conducting these assessments and is the reference document for organizations seeking to operationalize Clause 6.1.4.
2. AIIA vs. DPIA vs. FRAIA vs. ALTAI: Which Framework When
These four instruments are frequently confused because they overlap in scope. The matrix below clarifies their distinct purposes, mandatory triggers, and when to use them together.
| Framework | Primary Focus | Legal Basis | Mandatory For | Key Output |
|---|---|---|---|---|
| DPIA (Data Protection Impact Assessment) | Privacy and data protection risks from personal data processing | GDPR Art. 35; UK GDPR | High-risk personal data processing using new technologies | Internal risk record; consult supervisory authority if high residual risk |
| AIIA / FRIA (AI Impact Assessment / Fundamental Rights Impact Assessment) | Full spectrum of fundamental rights; algorithmic discrimination; societal harm | EU AI Act Art. 27; Colorado SB 24-205; Canada Directive | Deployers of high-risk AI systems (scope varies by jurisdiction) | Regulatory notification; audit-ready documentation |
| FRAIA (Dutch Fundamental Rights and Algorithms Impact Assessment) | Human rights risks in algorithmic government decision-making | Netherlands national policy; internal government directive | Dutch government bodies deploying algorithmic systems | Structured discussion record; publication on algorithm register |
| ALTAI (Assessment List for Trustworthy AI) | Voluntary self-assessment against EU Ethics Guidelines' 7 Trustworthy AI requirements | European Commission High-Level Expert Group guidance | Voluntary (pre-regulation; often used as governance baseline) | Self-assessment checklist; internal improvement actions |
When to use each:
- DPIA only: Your AI system processes personal data but does not make consequential decisions in the domains covered by Colorado or EU AI Act Annex III (e.g., an internal HR analytics dashboard that does not influence hiring).
- AIIA/FRIA only: Your AI system makes consequential decisions but does not process personal data (rare in practice).
- DPIA + AIIA/FRIA (integrated): The most common scenario for regulated organizations. EU AI Act Art. 27(4) explicitly allows a single document satisfying both obligations. Under Colorado SB 24-205 §6-1-1703(e), an assessment completed for another law satisfies Colorado's requirement if it is "reasonably similar in scope and effect."
- FRAIA: Required if you are a Dutch government body; recommended for any public-sector body in Europe as a structured supplement to DPIA and FRIA requirements.
- ALTAI: Use as a voluntary self-assessment baseline when beginning your AI governance program, or as preparation before completing a mandated AIIA. The ALTAI tool recommends completing a FRIA before beginning its checklist.
3. Who Must Complete an AIIA: Deployer vs. Developer Responsibilities
The AIIA obligation falls primarily on deployers—entities that put AI into operational use—but developers carry substantial documentation duties that directly enable the deployer's assessment.
Deployer Obligations
A deployer is any organization that uses an AI system to make or substantially influence consequential decisions about real people. Under Colorado SB 24-205, this means any entity operating in Colorado whose AI affects residents in the eight covered domains. Under the EU AI Act, deployers are defined in Art. 3(4), and the FRIA obligation under Art. 27 applies only to the categories identified in Section 1: public-law bodies, entities providing public services, and deployers of Annex III point 5(b) and 5(c) systems.
Deployers must:
- Complete the initial AIIA before first deployment (EU, Canada) or at deployment (Colorado).
- Repeat the assessment annually and within 90 days of any intentional and substantial modification.
- Implement the risk controls identified in the assessment.
- Retain records for the required period (3 years under Colorado; subject to AIMS review cycles under ISO 42001).
- Notify regulators where required (EU Art. 27 market surveillance authority; Colorado AG upon discovering algorithmic discrimination).
- Make assessment outputs available to consumers where required (Colorado small-deployer exception).
Developer Obligations
Developers—entities that build, substantially modify, or train high-risk AI systems—do not directly complete the deployer's AIIA, but they must provide the documentation that makes it possible. Under Colorado SB 24-205 §6-1-1702, developers must supply deployers with:
- A general statement describing reasonably foreseeable uses and known harmful uses.
- High-level summaries of training data and data governance measures.
- Documentation of how the system was evaluated for algorithmic discrimination.
- Intended use cases, limitations, and technical capabilities.
- Artifacts such as model cards, dataset cards, or prior impact assessments necessary for the deployer to complete its own assessment.
A developer that also serves as a deployer for its own system is not required to produce a separate developer documentation package unless the system is also provided to an unaffiliated entity.
Who Does the Work Inside an Organization
Best practice across Canada's AIA guidance and the EU AI Office's emerging questionnaire template is a multi-disciplinary assessment team that includes:
| Role | Contribution |
|---|---|
| AI / ML Engineers | System architecture, model behavior, data lineage |
| Legal / Compliance | Regulatory scope determination, legal basis, liability |
| Privacy / DPO | DPIA coordination, data category mapping |
| Product Owner | Use case definition, deployment context, user journey |
| HR / People Team | Employment law, workforce impact (if applicable) |
| Affected Community Representatives / Civil Society | Lived-experience input, bias identification |
| Risk / Audit | Risk scoring, control selection, documentation standards |
4. The 12 Sections Every AIIA Should Contain
The following 12 sections reflect the union of requirements across Colorado SB 24-205, EU AI Act Art. 27, Canada's AIA, ISO 42001 Annex A.5, and leading practice from the Future of Privacy Forum and IAPP. A single AIIA that contains all 12 sections will satisfy the core content requirements of every major jurisdiction currently in force.
4.1 System Identification and Deployment Context
Document the system's formal name, version, vendor (if third-party), the organizational owner, and the precise deployment context—the business process it supports, the decisions it makes or influences, the geographic scope, and the population of affected individuals. This section establishes the scope boundary for everything that follows and provides the factual predicate that regulators examine first.
4.2 Intended Purpose, Use Cases, and Benefits
State clearly what the system was designed to do, the specific use cases it is authorized for, and the benefits it is expected to provide to the organization and to affected persons. This section corresponds directly to Colorado §6-1-1703(b)(I) and EU Art. 27(1)(a). It also establishes the baseline against which misuse and scope creep can later be detected during post-deployment monitoring.
4.3 Legal Basis and Regulatory Scope
Identify every law, regulation, and contractual obligation that applies to this deployment: privacy law (GDPR, CCPA, state privacy statutes), sector-specific law (HIPAA, FCRA, ECOA), and AI-specific law (Colorado SB 24-205, EU AI Act, NYC LL 144). Determine whether a DPIA is also required and, if so, whether it will be integrated into this document or maintained separately. Document the legal authority under which consequential decisions are made.
4.4 Affected Populations and Categories of Data
Identify all categories of persons who interact with, are evaluated by, or are otherwise affected by the system. Map the data inputs (categories of personal and non-personal data the system processes) and outputs (types of decisions or recommendations produced). Flag any sensitive categories: race, ethnicity, sex, national origin, disability status, religion, sexual orientation, citizenship/immigration status, genetic data, biometric data, financial data, health data. The Canada AIA calls this the "impact" domain and weights it as the single highest-scoring risk area (maximum 52 of 169 raw points).
4.5 Risk Identification
Enumerate every potential harm the system could cause to individuals, groups, or society, distinguishing between harms that flow from correct outputs (e.g., a creditworthy applicant correctly denied because of a facially neutral criterion that is nevertheless a proxy for race) and harms from incorrect outputs (e.g., a misclassified medical image). Organize harms by type: allocative (denial of opportunity or resource), representational (misrepresentation of group characteristics), quality-of-service (degraded performance for subgroups), and physical/dignitary harms.
4.6 Risk Scoring (Likelihood × Severity × Exposed Population)
Assign a quantitative risk score to each identified harm using the methodology described in Section 5 of this guide. Document the score, the rationale for each dimension rating, and whether the resulting risk level falls within the organization's defined risk tolerance. This section produces the risk register that informs Section 4.7.
4.7 Existing and Proposed Mitigation Controls
For each risk rated above tolerance, document the controls already in place (technical, procedural, or organizational) and any additional controls proposed to reduce residual risk. Controls should be mapped to a recognized framework: NIST AI RMF Govern/Map/Measure/Manage subcategories, or ISO 42001 Annex A controls (A.5 through A.11). Include the owner, implementation timeline, and a residual risk score after controls are applied.
4.8 Algorithmic Discrimination Analysis
This section corresponds to Colorado §6-1-1703(b)(II) and EU Art. 27(1)(d). Specifically address whether the system has been tested for disparate impact across protected characteristics. Document the testing methodology (e.g., pre-deployment fairness evaluation, bias audit by independent third party, post-deployment demographic outcome monitoring), the metrics used (impact ratio, equalized odds, calibration), the results, and the steps taken to mitigate any identified disparity. For systems subject to NYC LL 144, the independent bias audit report and impact ratio tables should be incorporated here or appended.
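To make the impact-ratio arithmetic concrete, the sketch below computes selection rates and impact ratios for intersectional groups under the four-fifths convention. It is a minimal Python illustration: the field names (`race_ethnicity`, `sex`, `selected`) and the toy records are assumptions, and it is not a substitute for the independent audit NYC LL 144 requires.

```python
from collections import defaultdict

def impact_ratios(records, group_keys=("race_ethnicity", "sex")):
    """Compute selection rates and impact ratios for each (intersectional) group.

    `records` is an iterable of dicts containing the group attributes plus a
    boolean "selected" field. The impact ratio divides each group's selection
    rate by the highest group selection rate; the four-fifths rule flags
    ratios below 0.80 for review.
    """
    totals, selected = defaultdict(int), defaultdict(int)
    for r in records:
        group = tuple(r[k] for k in group_keys)   # e.g., ("Black", "Female")
        totals[group] += 1
        selected[group] += 1 if r["selected"] else 0

    rates = {g: selected[g] / totals[g] for g in totals}
    top_rate = max(rates.values())
    return {g: round(rate / top_rate, 3) for g, rate in rates.items()}

# Hypothetical toy data: flag any group whose impact ratio falls below 0.80.
applicants = [
    {"race_ethnicity": "White", "sex": "Male", "selected": True},
    {"race_ethnicity": "White", "sex": "Female", "selected": True},
    {"race_ethnicity": "Black", "sex": "Female", "selected": False},
    {"race_ethnicity": "Black", "sex": "Female", "selected": True},
]
for group, ratio in impact_ratios(applicants).items():
    print(group, ratio, "REVIEW" if ratio < 0.80 else "OK")
```

The same grouping logic extends to single-axis reporting by passing one key, which is why reporting intersectional categories costs little extra effort once the data is assembled.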
4.9 Human Oversight and Contestability Mechanisms
Describe the human-in-the-loop or human-on-the-loop safeguards in place: who reviews AI outputs before they become final decisions, under what conditions the system can be overridden, and what escalation path exists for edge cases. Document the recourse process for affected individuals—how they can challenge a decision, request human review, and receive an explanation. Under EU AI Act Art. 27(1)(e), human oversight arrangements must be described; under Colorado, deployers must offer human review of adverse decisions unless it poses a safety risk.
4.10 Transparency and Consumer Disclosure
Document the disclosures made to affected individuals: pre-decision notice that AI is in use (required by Colorado §6-1-1703(4)); plain-language description of the system's purpose; explanation of what data is collected; contact information for the deployer; and the public website statement summarizing high-risk AI deployments (Colorado §6-1-1703(5)). If the system is used in a customer-facing context, include the draft disclosure language.
4.11 Post-Deployment Monitoring Plan
Specify how the system's performance will be monitored after deployment: what metrics are tracked (accuracy, fairness indicators, distribution shift, complaint rates), at what frequency, by whom, and what triggers a reassessment or escalation. Under Colorado, annual review is the statutory minimum. Under ISO 42001 Clause 9, organizations must establish performance evaluation processes. Include the process for logging incidents, updating the AIIA when findings change, and reporting to regulators within statutory deadlines (e.g., notifying the Colorado AG within 90 days of discovering algorithmic discrimination, and notifying the market surveillance authority under EU Art. 27(3)).
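A monitoring plan is easier to operationalize when its escalation triggers are encoded rather than described only in prose. The sketch below shows one way to check a period's metrics against thresholds; the metric names and threshold values are illustrative assumptions, not statutory figures, and each deployer should substitute the triggers documented in its own AIIA.

```python
from dataclasses import dataclass

@dataclass
class MonthlyMetrics:
    min_impact_ratio: float   # lowest impact ratio across protected groups
    accuracy: float           # measured on a labeled audit sample
    complaint_rate: float     # complaints per 1,000 decisions
    drift_score: float        # e.g., population stability index on key inputs

# Illustrative thresholds; each organization sets its own in the AIIA monitoring plan.
THRESHOLDS = {"min_impact_ratio": 0.80, "accuracy": 0.90,
              "complaint_rate": 5.0, "drift_score": 0.25}

def escalation_triggers(m: MonthlyMetrics) -> list[str]:
    """Return the list of escalation triggers breached this period."""
    breaches = []
    if m.min_impact_ratio < THRESHOLDS["min_impact_ratio"]:
        breaches.append("impact ratio below threshold: suspend and investigate")
    if m.accuracy < THRESHOLDS["accuracy"]:
        breaches.append("accuracy below threshold: schedule model review")
    if m.complaint_rate > THRESHOLDS["complaint_rate"]:
        breaches.append("complaint rate above threshold: notify system owner")
    if m.drift_score > THRESHOLDS["drift_score"]:
        breaches.append("input drift detected: update AIIA risk register")
    return breaches

print(escalation_triggers(MonthlyMetrics(0.76, 0.93, 2.1, 0.31)))
```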
4.12 Sign-Off, Version Control, and Document Retention
Record the names, titles, and signatures of the individuals who reviewed and approved the assessment; the date of completion; the version number; the scheduled next review date; and the document retention policy (minimum three years under Colorado §6-1-1703(f)). For regulated industries, include reference to any data protection officer review, legal review, or third-party audit that was conducted.
5. How to Score Risk: Likelihood × Severity × Exposed Population
The standard two-dimensional risk matrix (likelihood × impact) is necessary but insufficient for AI systems, because AI harms frequently operate at scale. A bias that affects 0.1% of decisions is trivial in a sample of 100 people but catastrophic when the system processes 10 million applications per year. A well-calibrated AIIA risk score incorporates three dimensions.
The Three-Dimension Formula
Risk Score = Likelihood (L) × Severity (S) × Population Exposure Multiplier (P)
Score each dimension on a 1–5 scale as follows:
Likelihood (L): How likely is this harm to occur?
| Score | Label | Description |
|---|---|---|
| 1 | Rare | Requires unusual system failure or adversarial action; no prior incidents |
| 2 | Unlikely | Possible under edge-case conditions; limited evidence of occurrence |
| 3 | Possible | Known failure mode; has occurred in comparable deployments |
| 4 | Likely | Occurs in normal operation for a meaningful fraction of cases |
| 5 | Almost Certain | Routinely observed; structural feature of the system or data |
Severity (S): How bad is the harm if it occurs?
| Score | Label | Description |
|---|---|---|
| 1 | Negligible | Minor inconvenience; no lasting effect; easily corrected |
| 2 | Minor | Temporary disadvantage; correctable with modest effort |
| 3 | Moderate | Material impact on access to services, opportunity, or financial position |
| 4 | Serious | Significant rights violation; legal liability; physical, financial, or dignitary harm |
| 5 | Catastrophic | Irreversible harm; discriminatory exclusion at scale; threat to life or safety |
Population Exposure Multiplier (P):
| Multiplier | Condition |
|---|---|
| 1.0 | Fewer than 1,000 individuals per year affected |
| 1.5 | 1,000–100,000 individuals per year |
| 2.0 | 100,000–1 million individuals per year |
| 2.5 | Over 1 million individuals per year |
Resulting risk bands:
| Score Range | Risk Tier | Action |
|---|---|---|
| 1–6 | Low | Accept; routine monitoring |
| 7–15 | Moderate | Mitigate within standard timeline; document controls |
| 16–30 | High | Priority mitigation; senior sign-off; enhanced monitoring |
| 31–62.5 | Critical | Immediate action; consider halting deployment; escalate to board/DPO/legal |
Qualitative Modifiers
Beyond the numeric formula, the Canada AIA and NIST AI RMF identify several qualitative factors that should cause an assessor to manually escalate the risk tier regardless of the score: the presence of sensitive data categories, automation without meaningful human review, irreversibility of decisions, opacity of the model (inability to explain outputs), and the vulnerability of the affected population.
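As a worked example of the formula and bands above, the following Python sketch computes the score, assigns a tier, and applies a qualitative escalation. The one-tier bump for any qualitative modifier is an assumed policy choice for illustration; the frameworks say the tier should be raised but do not prescribe by how much.

```python
def population_multiplier(affected_per_year: int) -> float:
    """Map annual exposed population to the multiplier from the table above."""
    if affected_per_year < 1_000:
        return 1.0
    if affected_per_year <= 100_000:
        return 1.5
    if affected_per_year <= 1_000_000:
        return 2.0
    return 2.5

def risk_tier(likelihood: int, severity: int, affected_per_year: int,
              qualitative_flags: int = 0) -> tuple[float, str]:
    """Return (score, tier) per the L x S x P formula and the band table.

    `qualitative_flags` counts modifiers such as sensitive data, no human
    review, irreversibility, opacity, or a vulnerable population. Assumption:
    any flag escalates the tier by one step regardless of the numeric score.
    """
    score = likelihood * severity * population_multiplier(affected_per_year)
    tiers = ["Low", "Moderate", "High", "Critical"]
    if score <= 6:
        idx = 0
    elif score <= 15:
        idx = 1
    elif score <= 30:
        idx = 2
    else:
        idx = 3
    if qualitative_flags > 0:
        idx = min(idx + 1, 3)   # manual escalation (assumed one-step policy)
    return score, tiers[idx]

# Example: likely (4), serious (3) harm affecting 250,000 people per year,
# with one qualitative flag (fully automated decisioning).
print(risk_tier(4, 3, 250_000, qualitative_flags=1))  # (24.0, 'Critical')
```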
6. Red Flags That Raise the Risk Tier
The following factors, drawn from the Canada AIA, EU AI Act Annex III, Colorado SB 24-205, and NIST AI RMF GOVERN-6 sub-categories, should prompt escalation to a higher risk tier or trigger a mandatory senior review even when the numeric score is moderate.
Sensitive Data Categories: The system uses or infers race, ethnicity, national origin, religion, sex, sexual orientation, gender identity, disability, immigration status, genetic data, biometric data (for identification), health or medical data, or financial information. These categories carry elevated harm potential and are the basis for anti-discrimination law in every covered jurisdiction.
Vulnerable Populations: Affected individuals include minors, elderly persons, people with disabilities, asylum seekers or undocumented persons, individuals with limited English proficiency, or people in financial distress. Vulnerable populations typically have fewer resources to challenge incorrect decisions and face compounding harms.
Fully Automated Decisioning Without Human Review: The system makes or executes a final decision with no human review step before the decision takes effect. This is a risk escalation factor under GDPR Art. 22, Colorado SB 24-205, and the EU AI Act, all of which require contestability mechanisms for automated decisions.
Scale of Deployment: The system is deployed to more than 100,000 individuals annually, or processes decisions in real time at high volume with minimal sampling-based quality control. Scale amplifies every error rate from negligible to systemic.
Opacity / Lack of Explainability: The system uses a black-box model (e.g., deep neural network, large language model) whose outputs cannot be traced to interpretable features. Opacity prevents meaningful human oversight, makes it impossible to explain adverse decisions to affected individuals, and obscures discriminatory patterns during post-deployment monitoring.
Training Data with Known or Likely Historical Bias: The system was trained on data that reflects historical patterns of discrimination (e.g., historical hiring records, historical loan approval data, criminal justice records). Historical bias in training data is one of the most reliably documented sources of algorithmic discrimination.
Consequential and Irreversible Decisions: The decision affects access to healthcare, housing, credit, employment, education, or government benefits, and cannot easily be reversed after it takes effect. An incorrect denial of a mortgage application is far more harmful than an incorrect product recommendation.
Third-Party AI Without Developer Documentation: The organization is deploying a commercial AI system and has not received the model cards, dataset cards, evaluation reports, and known-limitation disclosures that Colorado requires developers to provide under §6-1-1702. Deploying without this documentation makes it impossible to complete a good-faith AIIA.
7. Stakeholder Consultation Requirements
A legally defensible AIIA is not a unilateral document completed by one team. All major frameworks require meaningful consultation with a structured set of stakeholders.
Internal Consultation
- Legal and Compliance: Determine regulatory scope, identify applicable laws, assess liability exposure.
- Data Protection Officer / Privacy Team: Confirm DPIA requirement, identify personal data processing, integrate or cross-reference DPIA findings.
- Technical / AI Engineering: Provide model documentation, training data provenance, fairness evaluation results.
- Business / Product Owner: Define intended use cases, confirm deployment scope, commit to post-deployment monitoring resources.
- HR / Employment Law (for workforce-affecting systems): Assess labor law implications, draft employee notices.
External Consultation
- Affected Communities: The Canada AIA requires documenting whether affected client groups were consulted, at what stage, and how feedback was addressed. The EU AI Office's emerging FRIA questionnaire template is expected to include a stakeholder engagement section. The FRAIA from the Netherlands explicitly requires dialogue with affected groups.
- Independent Technical Reviewer: For high-risk systems (Canada AIA Level III–IV, EU AI Act high-risk systems), an external peer review of the technical design and bias testing is required or strongly recommended.
- Legal Services Unit: Mandatory under Canada's Directive on Automated Decision-Making from the concept stage of any automation project.
- Regulatory Pre-Consultation: If DPIA shows high residual risk, GDPR Art. 36 requires prior consultation with the supervisory authority. EU Art. 27(3) requires notifying the market surveillance authority after completing the FRIA.
Documentation of Consultation
Every AIIA should include a consultation log that records: who was consulted, when, in what format (meeting, written review, public comment), what concerns they raised, and how those concerns were addressed or why they were not incorporated. This log is a key element of the Canada AIA's mitigation scoring and will be scrutinized by regulators assessing good faith.
8. Full Sample AIIA Template
The following template is directly copyable. Each H3 section corresponds to one of the 12 content areas from Section 4. Fill in the bracketed prompts with system-specific information. The template is designed to satisfy the content requirements of Colorado SB 24-205 §6-1-1703, EU AI Act Article 27, Canada's AIA structure, and ISO 42001 Annex A.5.
AI IMPACT ASSESSMENT
Document Control
| Field | Details |
|---|---|
| System Name | [Enter official system name and version] |
| Assessment Date | [DD/MM/YYYY] |
| Assessment Version | [e.g., v1.0 — Initial; v2.0 — Annual Review] |
| Next Scheduled Review | [DD/MM/YYYY] |
| Completed By | [Name, Title, Department] |
| Legal / DPO Review | [Name, Title, Date of Review] |
| Approved By | [Name, Title] |
| Approval Date | [DD/MM/YYYY] |
| Retention Period | 3 years from final deployment (per Colorado § 6-1-1703(f)) or as required by applicable law |
Section 1: System Identification and Deployment Context
1.1 System Description
- Full system name and version: [e.g., "ResumeScanner Pro v3.2"]
- System type: [ ] Predictive model [ ] Classification model [ ] Recommendation engine [ ] Generative AI [ ] Other: ___
- Vendor / developer: [Name of vendor or "In-house developed"]
- Deployment environment: [Cloud / on-premises / hybrid]
- Geographic scope of deployment: [List all states, countries, or jurisdictions]

1.2 Business Process Context
- Describe the specific business process this system supports:
> [E.g., "The system screens incoming job applications and assigns each applicant a score between 0 and 100. Applications scoring below 60 are automatically rejected without human review. Applications scoring 60–79 are flagged for recruiter review. Applications scoring 80+ advance to phone screen."]
- Frequency and volume of use: [E.g., "Approximately 50,000 applications processed per quarter"]
- Countries / U.S. states of affected individuals: [List]

1.3 Organizational Owner
- Business unit responsible for deployment: [Name]
- System owner (accountable individual): [Name, Title]
- Technical point of contact: [Name, Title]
- Privacy / DPO point of contact: [Name, Title]
Section 2: Intended Purpose, Authorized Use Cases, and Benefits
2.1 Stated Purpose
Provide a plain-language statement of what the system was designed to do and the problem it is intended to solve:
> [E.g., "To reduce time-to-hire by automating the initial resume review stage and improving consistency in candidate evaluation."]

2.2 Authorized Use Cases
List each use case for which the system is authorized:
- [ ] [Use case 1, e.g., Screening applications for open positions]
- [ ] [Use case 2]
- [ ] [Use case 3]

2.3 Known Unauthorized or Harmful Uses
Document use cases explicitly excluded or prohibited:
- [ ] [E.g., "System shall not be used to evaluate current employee performance"]
- [ ] [E.g., "System shall not be used as the sole basis for termination decisions"]

2.4 Expected Benefits
- To the organization: [E.g., reduced time-to-hire, cost savings, recruiter consistency]
- To affected individuals: [E.g., faster decision turnaround, more consistent criteria]
Section 3: Legal Basis and Regulatory Scope
3.1 Applicable Laws and Regulations
For each applicable law, state whether the system is in-scope and what specific obligation applies:
| Law / Regulation | In Scope? | Applicable Obligation |
|---|---|---|
| Colorado SB 24-205 (if CO residents affected) | [ ] Yes [ ] No | Impact assessment, annual review, consumer notice |
| EU AI Act Art. 27 (if EU deployer / EU residents) | [ ] Yes [ ] No | FRIA prior to deployment; notify market surveillance authority |
| NYC Local Law 144 (if NYC employment decisions) | [ ] Yes [ ] No | Annual independent bias audit; candidate notice |
| GDPR / UK GDPR (if EU/UK personal data processed) | [ ] Yes [ ] No | DPIA (integrate or maintain separately) |
| Canada Directive on Automated Decision-Making | [ ] Yes [ ] No | AIA; publish on Open Government Portal |
| CCPA / CPRA (if CA residents affected) | [ ] Yes [ ] No | ADMT opt-out; risk assessment |
| HIPAA (if health information used) | [ ] Yes [ ] No | PHI safeguards; prohibitions on discriminatory use |
| ECOA / FCRA (if credit decisions involved) | [ ] Yes [ ] No | Adverse action notice; non-discrimination |
| ISO/IEC 42001 (if certification sought) | [ ] Yes [ ] No | AIMS impact assessment per Clause 6.1.4 |
| [Other: ___] | [ ] Yes [ ] No | [Describe] |
3.2 DPIA Integration
- Is a separate DPIA required? [ ] Yes [ ] No
- If yes, is the DPIA integrated into this document? [ ] Yes — DPIA content included below [ ] No — Maintained as separate document [Document ID: ___]

3.3 Legal Authority for Consequential Decision
Describe the legal authority or statutory basis under which the organization is authorized to make the consequential decisions supported by this system:
> [E.g., "Employment decisions are made under Colorado labor law. The system is used to assist, not replace, human judgment in the hiring process."]
Section 4: Affected Populations and Data Categories
4.1 Categories of Affected Persons
Identify all persons who interact with or are affected by the system's outputs:
| Group | Role | Estimated Annual Volume |
|---|---|---|
| [E.g., Job applicants] | Subject of AI-assisted decision | [E.g., 200,000] |
| [E.g., Current employees (if applicable)] | Subject of AI-assisted decision | [___] |
| [E.g., Third parties (if applicable)] | Indirectly affected | [___] |
4.2 Sensitive Attribute Flags
Does the system use, process, or make inferences about any of the following? (Check all that apply)
- [ ] Race or ethnicity
- [ ] National origin or citizenship/immigration status
- [ ] Sex, gender identity, or sexual orientation
- [ ] Religion or belief
- [ ] Disability status or health condition
- [ ] Age (particularly minors or elderly)
- [ ] Financial status or creditworthiness
- [ ] Biometric data (fingerprints, facial recognition)
- [ ] Genetic data
- [ ] Criminal history
- [ ] None of the above
For each checked item, describe how the attribute enters the system (directly or as a proxy) and what controls prevent its use as a discriminatory factor:
> [Describe]

4.3 Input Data Categories
List all data types the system processes as inputs:
- [E.g., Resume text, employment history, education credentials]
- [E.g., ___]

4.4 Output Data Categories
Describe all outputs the system produces and how they are used:
- [E.g., Applicant score (0–100); pass/fail flag; reason codes for rejection]

4.5 Data Sources and Provenance
- Training data source(s): [Name sources; describe whether data reflects historical patterns]
- Known biases in training data: [Describe any known or suspected biases and how they were addressed]
- Ongoing data inputs in deployment: [Describe real-time or batch data feeds]
Section 5: Risk Identification
Use the table below to enumerate potential harms. Add rows as needed.
| Risk ID | Harm Description | Type | Who is Harmed | Root Cause |
|---|---|---|---|---|
| R-001 | [E.g., Qualified candidates from minority groups disproportionately screened out due to proxy variables] | Allocative | Job applicants; protected class members | Training data reflects historical underrepresentation |
| R-002 | [E.g., System assigns low scores to candidates with employment gaps, disproportionately affecting caregivers] | Allocative | Primarily women | Feature engineering; gap-penalty scoring logic |
| R-003 | [E.g., System fails to process non-English resumes accurately] | Quality-of-service | Non-native English speakers | Training data underrepresentation |
| R-004 | [Add additional risks] | [___] | [___] | [___] |
Section 6: Risk Scoring
Score each risk using the methodology from Section 5 of this guide (Likelihood × Severity × Population Exposure Multiplier).
| Risk ID | Likelihood (1–5) | Severity (1–5) | Population Multiplier | Risk Score | Risk Tier |
|---|---|---|---|---|---|
| R-001 | [L] | [S] | [P] | [L×S×P] | [Low / Moderate / High / Critical] |
| R-002 | [L] | [S] | [P] | [L×S×P] | [___] |
| R-003 | [L] | [S] | [P] | [L×S×P] | [___] |
Risk Tolerance Statement
The organization's defined risk tolerance for AI systems in this use category is: [Low / Moderate / High]. All risks scored above [threshold] require escalation to [role/committee] and a documented treatment plan within [X] days.
Section 7: Mitigation Controls
For each risk rated Moderate, High, or Critical, document the controls in place and proposed.
| Risk ID | Existing Controls | Proposed Additional Controls | Control Owner | Implementation Date | Residual Risk Score |
|---|---|---|---|---|---|
| R-001 | [E.g., Fairness evaluation conducted pre-deployment; demographic parity analysis across 5 protected groups] | [E.g., Quarterly impact ratio monitoring; threshold adjustment if impact ratio falls below 0.80] | [Name / Team] | [Date] | [Score] |
| R-002 | [E.g., Gap-penalty feature removed from scoring model] | [E.g., Annual re-evaluation of feature importance using causal fairness analysis] | [Name / Team] | [Date] | [Score] |
| R-003 | [E.g., Multilingual resume parser integrated] | [E.g., Quality-of-service audit across language groups semi-annually] | [Name / Team] | [Date] | [Score] |
Control Framework Reference
Controls are mapped to: [ ] NIST AI RMF [ ] ISO 42001 Annex A [ ] ALTAI [ ] Internal control framework: [Name]
Section 8: Algorithmic Discrimination Analysis
8.1 Pre-Deployment Bias Testing
- Was the system tested for disparate impact or disparate treatment prior to deployment? [ ] Yes [ ] No
- Testing methodology: [E.g., Disparate impact analysis using four-fifths rule; equalized odds evaluation; counterfactual fairness testing]
- Protected characteristics tested: [List all tested]
- Results summary: [E.g., "Impact ratio for Race/Ethnicity: 0.86 (above 0.80 threshold). Impact ratio for Sex: 0.91."]
- Testing conducted by: [ ] Internal team [ ] Independent third party (required for NYC LL 144)
- Third-party auditor name (if applicable): [Name]
- Audit report reference: [Document ID or URL]

8.2 Known Disparity Findings and Responses
If any testing revealed a disparity (impact ratio below 0.80 or equivalent threshold):
- Describe the disparity:
- Steps taken to investigate root cause:
- Mitigation measures implemented:
- Residual disparity after mitigation (if any):

8.3 Post-Deployment Monitoring for Discrimination
- Frequency of ongoing bias monitoring: [Monthly / Quarterly / Annually]
- Metrics tracked: [E.g., Selection rate by protected group; complaint rates by demographic segment]
- Trigger for escalation: [E.g., "If impact ratio for any group falls below 0.80, the system is suspended from use until root cause is identified and corrected"]
- Escalation path: [Role → Role → Role]
Section 9: Human Oversight and Contestability
9.1 Human Oversight Mechanisms
Describe all points at which a human reviews the AI output before it becomes a final decision:
| Decision Stage | AI Output | Human Review Required? | Reviewer Role | Override Authority? |
|---|---|---|---|---|
| [E.g., Initial screen] | Score + pass/fail flag | [ ] Yes [ ] No | [E.g., Recruiter] | [ ] Yes [ ] No |
| [E.g., Final selection] | Ranked shortlist | [ ] Yes [ ] No | [E.g., Hiring Manager] | [ ] Yes [ ] No |
9.2 Recourse Process
Describe how affected individuals can challenge a decision:
- Notification of adverse decision: [How and when notified; does notice include AI involvement?]
- Right to request explanation: [Yes / No; How to request; response time]
- Right to request human review: [Yes / No; Process; timeframe]
- Formal appeal process: [Describe; include the contact information provided in consumer disclosures]
- Escalation to regulator: [Describe any regulatory complaint pathway provided to individuals]

9.3 Training of Human Reviewers
- Have human reviewers been trained to recognize and correct AI errors? [ ] Yes [ ] No [ ] Planned
- Training description: [Describe training content and frequency]
- Documentation of training: [Reference location of training records]
Section 10: Transparency and Consumer Disclosures
10.1 Pre-Decision Notice
Text of the notice provided to affected individuals before the AI system is used to evaluate them:
> [Paste or draft the exact disclosure text here. Must include: purpose of the AI system; nature of the consequential decision; contact information; plain-language description; and how to access the deployer's public website statement.]

10.2 Adverse Decision Notice
Text or process for notifying individuals of an adverse decision made or substantially influenced by the AI system:
> [Describe or paste notice text, including: statement that AI was used; how the individual can request human review or an explanation]

10.3 Public Website Statement
URL where the organization's public statement about this AI deployment is posted (required by Colorado § 6-1-1703(5)):
> [URL or "Not yet published — planned for [date]"]

Summary of what the public statement covers:
- Types of high-risk AI systems currently deployed: [Describe]
- How algorithmic discrimination risks are managed: [Describe]
- Nature, source, and extent of information collected: [Describe]
Section 11: Post-Deployment Monitoring Plan
| Monitoring Element | Detail |
|---|---|
| Monitoring frequency | [E.g., Monthly metrics dashboard; quarterly bias audit; annual full AIIA review] |
| Metrics tracked | [List: accuracy, selection rates by group, complaint volume, override rate, model drift indicators] |
| Responsible party | [Name / Team] |
| Escalation trigger | [Describe conditions that require system suspension or AIIA update] |
| Incident log location | [Document/system location] |
| AIIA update triggers | [Material system change; new regulation; post-deployment finding above risk tolerance] |
| Regulatory notification procedure | [Describe process for notifying Colorado AG, EU market surveillance authority, or other regulator within statutory deadline] |
Section 12: Sign-Off and Document Control
12.1 Assessment Team
| Name | Title | Role in Assessment | Signature | Date |
|---|---|---|---|---|
| [Name] | [Title] | Assessment Lead | ___ | [Date] |
| [Name] | [Title] | Legal Review | ___ | [Date] |
| [Name] | [Title] | DPO / Privacy Review | ___ | [Date] |
| [Name] | [Title] | Technical Review | ___ | [Date] |
12.2 Approval
| Name | Title | Approval Signature | Date |
|---|---|---|---|
| [Name] | [Title — must be senior responsible owner] | ___ | [Date] |
12.3 Version History
| Version | Date | Author | Summary of Changes |
|---|---|---|---|
| 1.0 | [Date] | [Name] | Initial assessment |
| 2.0 | [Date] | [Name] | Annual review |
| [___] | [___] | [___] | [___] |
12.4 Related Documents
- DPIA reference: [Document ID]
- Model card / dataset card: [Document ID or repository link]
- Bias audit report (if applicable): [Document ID]
- Developer documentation received: [Document ID]
End of Sample AIIA Template
9. Common AIIA Mistakes (Regulator Perspective)
Based on enforcement patterns from the NYC DCWP, guidance from the Colorado AG's office, the EU AI Office's published guidance, and analysis from the IAPP and Future of Privacy Forum, the following mistakes are the most common—and the most likely to attract regulatory scrutiny.
Treating the AIIA as a one-time exercise. Colorado requires reassessment annually and within 90 days of any substantial modification. Regulators look for version history and evidence that the assessment was genuinely updated, not re-dated.
Conflating a vendor's marketing materials with developer documentation. A deployer's obligation to conduct an AIIA requires actual system documentation—model cards, training data summaries, known limitations—not a sales sheet. If your vendor cannot provide this, that is itself a red flag requiring escalation.
Failing to test for intersectional bias. NYC LL 144's final rule explicitly requires bias audits to report intersectional categories (e.g., race × sex), not just protected characteristics in isolation. An AIIA that reports only single-axis fairness metrics will be inadequate for this jurisdiction and will miss the most harmful disparities in others.
Defining "affected population" too narrowly. Assessors frequently scope the impact analysis to direct users of the system and miss indirectly affected third parties. An AIIA for an insurance pricing model must consider not only policyholders but also individuals who may be denied coverage or priced out of the market as a result of the system's outputs.
Risk scoring without population scale. A 2% error rate sounds negligible until the system processes five million decisions per year, producing 100,000 incorrect outcomes. The AIIA's risk register must account for volume.
No consultation log. Regulators increasingly expect to see documented evidence that affected communities or their representatives were consulted. An AIIA completed exclusively by the internal technical team, with no external engagement, will draw scrutiny.
Inadequate recourse documentation. The right to contest an automated decision is a substantive legal right in every jurisdiction covered by this guide. An AIIA that describes recourse as "contact customer service" without a defined process, timeline, and escalation path does not satisfy the legal standard.
Relying on a DPIA as a complete substitute for an AIIA. A DPIA focuses on personal data processing risks. It does not systematically analyze algorithmic discrimination, societal harm, or the full range of fundamental rights affected by an AI system. EU Art. 27(4) is explicit that the FRIA "complements" the DPIA—it does not replace it.
10. How Long an AIIA Takes and Who Should Own It
The time and resource investment varies substantially by system risk level. The following estimates reflect real-world experience across organizations that have completed assessments under Colorado SB 24-205 and EU AI Act preparation programs.
| Risk Level | Typical Duration | Core Team Size | Key Activities |
|---|---|---|---|
| Low (internal tool, limited impact) | 1–2 weeks | 2–3 people | Document review; template completion; legal sign-off |
| Moderate (customer-facing, limited consequential decisions) | 3–6 weeks | 4–6 people | Bias testing; stakeholder interviews; legal and DPO review |
| High (consequential decisions at scale, sensitive data) | 6–12 weeks | 6–10 people | External bias audit; community consultation; board-level approval |
| Critical (fully automated decisions, vulnerable populations, large scale) | 3–6 months | Cross-functional team + external advisors | Full FRIA/DPIA integration; regulatory pre-consultation; third-party technical review |
Ownership Model
The AIIA should be owned by the business or product owner who is accountable for the deployment decision—not by the privacy team or the legal team alone. Those teams provide essential inputs, but ownership must sit with the person who controls the resource allocation and the go/no-go decision. In practice, organizations that assign AIIA ownership to legal or privacy teams find that assessments become compliance formalities divorced from system design decisions.
The EU AI Office's emerging guidance and the IAPP's AI Governance Profession Report both point to an emerging AI Governance Officer role as the appropriate institutional home for AIIA program management: maintaining the template library, tracking regulatory changes, scheduling reviews, and escalating findings to the board.
Automation and Tooling
Manual AIIAs at scale are not operationally sustainable. Several platforms now automate significant portions of the assessment workflow—inventory discovery, questionnaire routing, evidence collection, and regulatory framework mapping. Organizations managing large AI portfolios should evaluate tools in our /vendors directory and /best/eu-ai-act-compliance-tools, including:
- [Credo AI](https://www.credo.ai): Enterprise AI governance platform with policy packs pre-loaded for EU AI Act, NIST AI RMF, ISO 42001, and Colorado SB 24-205. Automates evidence collection and audit-ready reporting.
- [Holistic AI](https://www.holisticai.com): Full-stack AI governance platform with automated risk assessment, shadow AI discovery, and continuous monitoring.
- [Vanta](https://www.vanta.com): Trust management platform that supports ISO 42001 compliance workflows alongside established SOC 2 and ISO 27001 programs.
- [Saidot](https://www.saidot.ai): AI transparency and governance platform with public AI register functionality, suited for EU public-sector FRIA obligations.
- [Modulos](https://www.modulos.ai): AI governance and compliance automation platform focused on EU AI Act readiness and continuous model monitoring.
11. Frequently Asked Questions
Q1: Is the AIIA the same as the EU AI Act's Conformity Assessment?
No. The Conformity Assessment (CA) under the EU AI Act is conducted by or on behalf of the provider (developer) of a high-risk AI system, prior to placing the system on the market. It verifies that the system meets the technical requirements in Chapter III (data governance, technical documentation, transparency, human oversight, accuracy). The Fundamental Rights Impact Assessment (FRIA) under Article 27 is conducted by the deployer and focuses on the rights impacts of how the system is used in a specific context. An organization that builds and deploys its own system must complete both.
Q2: If we already have a DPIA, do we need a separate AIIA?
In most cases, yes—you need additional work beyond the DPIA. A DPIA focuses on personal data protection risks and is insufficient to satisfy the full scope of an AIIA or FRIA, which covers non-discrimination, autonomy, freedom of expression, access to justice, and other fundamental rights not addressed by data protection law. However, EU AI Act Art. 27(4) allows a single integrated document to satisfy both obligations if it covers all required content areas from both Art. 35 GDPR and Art. 27 EU AI Act. Colorado §6-1-1703(e) similarly allows cross-jurisdictional assessments to satisfy the Colorado requirement if they are "reasonably similar in scope and effect."
Q3: Our AI vendor says they've already completed an impact assessment. Can we rely on it?
Partially. A developer-completed assessment provides valuable inputs—technical documentation, training data summaries, known limitations, pre-deployment bias test results—and Colorado's developer documentation requirements under §6-1-1702 are specifically designed to make this information available to deployers. However, the developer's assessment is based on the system in the abstract; your AIIA must be grounded in your specific deployment context, your affected population, and your organizational controls. You cannot fully delegate the AIIA obligation to the vendor.
Q4: How does the AIIA interact with NYC Local Law 144?
NYC LL 144 is narrower than a general AIIA: it applies only to automated employment decision tools (AEDTs) used for hiring or promotion decisions affecting NYC residents, and its core requirement is an independent third-party bias audit that calculates impact ratios for sex and race/ethnicity categories (including intersectional categories). A complete AIIA for a covered AEDT should incorporate the LL 144 bias audit report as the algorithmic discrimination analysis section (Section 8 of this template), append the bias audit summary for public posting, and document the candidate notification process.
Q5: What are the penalties for non-compliance?
Penalties vary by jurisdiction. Under Colorado SB 24-205, violations constitute deceptive trade practices under the Colorado Consumer Protection Act, with penalties up to $20,000 per violation enforced by the state Attorney General. Under the EU AI Act, FRIA non-compliance by deployers can result in fines up to €15 million or 3% of worldwide annual turnover for non-compliance with obligations, and up to €35 million or 7% of worldwide annual turnover for violations of prohibited AI practices. NYC LL 144 enforcement began July 5, 2023, with the DCWP empowered to investigate complaints and impose civil penalties. In Canada, failure to comply with the Directive on Automated Decision-Making carries departmental accountability obligations; the proposed AIDA (Artificial Intelligence and Data Act), if enacted, would introduce criminal penalties for serious harms.
Sources
- Colorado SB 24-205 — Official Bill Text, Colorado General Assembly
- Colorado Revised Statutes § 6-1-1703 — Deployer Duty to Avoid Algorithmic Discrimination, Justia Law
- EU AI Act, Article 27 — Fundamental Rights Impact Assessment for High-Risk AI Systems, artificialintelligenceact.eu
- Algorithmic Impact Assessment Tool — Government of Canada, Treasury Board Secretariat
- Automated Employment Decision Tools — NYC Department of Consumer and Worker Protection
- Enforcement of Local Law 144 — Office of the New York State Comptroller (December 2025)
- Data Protection Impact Assessments (DPIAs) — UK Information Commissioner's Office
- Fundamental Rights and Algorithms Impact Assessment (FRAIA) — Government of the Netherlands
- Assessment List for Trustworthy AI (ALTAI) — European Commission High-Level Expert Group on AI
- NIST AI Risk Management Framework (AI RMF) — National Institute of Standards and Technology
- ISO/IEC 42001 Annex A Controls Explained — ISMS.online
- ISO 42001 Annex A Control A.6 — AI System Life Cycle, ISMS.online
- AI Governance Behind the Scenes: Emerging Practices for AI Impact Assessments (2025 Update) — Future of Privacy Forum
- AI Governance Profession Report 2025 — IAPP and Credo AI
- DPIA vs. FRIA: 5 Key Differences Explained — AI Act Blog
- Colorado AI Act (SB 24-205) Compliance Guide — TrustArc
- AI Conformity Assessment and GDPR DPIA: A Comparison — CRANIUM