Colorado AI Act: What Insurers and Employers Need to Do Before June 2026
Colorado SB24-205 takes effect June 30, 2026. Here's what insurers and employers must do now on impact assessments, consumer notices, and AG enforcement.
By ACV Editorial · April 22, 2026 · 11 min read · Last reviewed April 22, 2026
On May 17, 2024, Colorado Governor Jared Polis signed Senate Bill 24-205 into law, making Colorado the first U.S. state to enact a comprehensive, risk-based AI consumer protection statute. Originally scheduled to take effect on February 1, 2026, the law's compliance obligations were pushed to June 30, 2026 when Governor Polis signed SB 25B-004 on August 28, 2025, giving deployers and developers roughly five additional months to finalize governance programs.
That delay is not a reprieve. Insurers and employers who have been waiting for rulemaking clarity should treat the revised date as a hard deadline, not an invitation to defer. The Colorado Attorney General's rulemaking process is ongoing, and enforcement authority is unambiguous.
This guide covers every material obligation under SB24-205 that insurers and employers must operationalize before June 30, 2026, how the law interacts with Colorado's earlier insurance-specific statute SB 21-169, and where the Colorado framework sits relative to the EU AI Act.
What the Law Actually Covers
SB24-205 — formally titled the Colorado Artificial Intelligence Act (CAIA) — targets high-risk AI systems, defined as any AI system that, when deployed, makes or is a substantial factor in making a consequential decision. The statute enumerates eight consequential decision domains:
- Employment: hiring, promotion, termination, compensation
- Education: admissions, scholarship eligibility
- Financial services: credit, lending terms
- Healthcare: diagnosis, treatment, coverage
- Housing: rental, mortgage eligibility
- Insurance: coverage, pricing, claims
- Government services: benefits eligibility
- Legal services: access to legal representation
The law applies to any company — regardless of where it is headquartered — whose AI system affects Colorado consumers in any of these domains. Jurisdiction follows the data subject, not the developer's address.
Two distinct roles carry distinct obligations:
- Developers build, substantially modify, or train high-risk AI systems and then make them available to deployers.
- Deployers use high-risk AI systems to make or substantially factor into consequential decisions about Colorado consumers.
The Insurer's Compliance Picture Is Already Complicated
Insurers face a layered regulatory environment that predates SB24-205. Colorado's Senate Bill 21-169, signed July 6, 2021, restricts insurers from using external consumer data and information sources (ECDIS), algorithms, and predictive models in a manner that results in unfair discrimination based on protected characteristics including race, color, national or ethnic origin, religion, sex, sexual orientation, disability, and gender identity.
The Division of Insurance implementing regulation — Regulation 10-1-1 — became effective November 14, 2023, and required life insurers to file a progress report by June 1, 2024, followed by annual compliance attestations beginning December 1, 2024. Private passenger auto and health insurance lines remain under active rulemaking.
SB24-205 does not replace SB 21-169 — it adds to it. Colorado carriers now operate under two separate regulatory regimes enforced by two separate regulators: the Division of Insurance (SB 21-169) and the Attorney General (SB24-205). The practical consequence is that an insurer's AI underwriting system faces both annual bias attestation requirements under Division of Insurance rules and annual impact assessment requirements under CAIA. Documentation standards are similar in structure but distinct in scope, and satisfying one does not automatically satisfy the other.
For insurers, the CAIA's definition of algorithmic discrimination is broadly construed. Disparate impact based on protected characteristics — even when those attributes are not explicitly coded into the model — constitutes algorithmic discrimination under the statute. This is a critical distinction from traditional disparate treatment frameworks: actuarially defensible models that produce disparate outcomes may still require remediation and disclosure.
Six Core Obligations for Deployers
1. Implement a Risk Management Policy and Program
Every deployer of a high-risk AI system must implement a documented Risk Management Policy and Program aligned with a nationally or internationally recognized framework. The statute explicitly identifies the NIST AI Risk Management Framework and ISO/IEC 42001 as qualifying frameworks. Formal ISO 42001 certification is not required, but documented alignment is. The policy must cover:
- How the organization identifies high-risk AI systems in its portfolio
- How it monitors those systems for algorithmic discrimination
- The governance escalation path when discrimination is detected
- Sensitivity and volume of data processed by each high-risk system
This policy must also be publicly available — a requirement that has no direct parallel in most other U.S. AI frameworks. The organization's public-facing statement must describe what high-risk systems are in use, how discrimination risks are managed, and the nature and source of data being processed.
2. Conduct Impact Assessments — Before Deployment, Annually, and After Modifications
The centerpiece compliance obligation is the impact assessment. Deployers must complete an initial impact assessment for every high-risk AI system, then repeat it at least annually and within 90 days of any intentional and substantial modification. The assessment must document:
- System purpose, intended use cases, and deployment context
- Categories of input data and output data
- Known or foreseeable risks of algorithmic discrimination and mitigation steps
- Performance metrics and known limitations
- Post-deployment monitoring procedures
- Whether customization data was used
Impact assessments must be retained for at least three years following final deployment and provided to the Attorney General upon request within 90 days. The Attorney General's office has rulemaking authority over assessment format and content standards — organizations should monitor that rulemaking as it progresses.
Platforms including Credo AI and Holistic AI have built structured assessment workflows specifically mapped to SB24-205's impact assessment requirements, including bias testing, documentation templates, and evidence bundles designed to support regulatory examination.
3. Provide Consumer Notice Before Consequential Decisions
Before a high-risk AI system makes or substantially contributes to a consequential decision about a consumer, the deployer must provide the consumer with:
- Notice that a high-risk AI system is being used
- The purpose of the system and the nature of the decision
- A plain-language description of the system
- Contact information for the deployer
- How to access the deployer's public transparency statement
Notice must be delivered at or before the time of the decision. The exception is narrow: if AI use would be obvious to a reasonable person (e.g., an explicitly labeled automated chatbot), disclosure is not required. The obvious-AI exception does not extend to most insurance underwriting or employment screening contexts.
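The five required notice elements can be modeled as a record with a completeness check gating the decision. This is a minimal sketch under our own naming; nothing in the statute prescribes a data format, and the field names and sample values are hypothetical.

```python
from dataclasses import dataclass, fields

@dataclass
class ConsumerNotice:
    """The five pre-decision notice elements required of deployers.
    Field names are illustrative, not statutory."""
    ai_system_disclosed: str         # notice that a high-risk AI system is in use
    purpose_and_decision: str        # purpose of the system, nature of the decision
    plain_language_description: str  # how the system works, in plain language
    deployer_contact: str            # how to reach the deployer
    transparency_statement_url: str  # link to the public transparency statement

    def is_complete(self) -> bool:
        """Every element must be populated before the decision issues."""
        return all(getattr(self, f.name).strip() for f in fields(self))

notice = ConsumerNotice(
    ai_system_disclosed="An automated underwriting model will evaluate your application.",
    purpose_and_decision="Pricing and eligibility for an auto insurance policy.",
    plain_language_description="The model scores applications using driving and claims history.",
    deployer_contact="compliance@example-insurer.com",
    transparency_statement_url="https://example-insurer.com/ai-transparency",
)
assert notice.is_complete()
```

A completeness gate like this belongs upstream of the decision itself, since notice delivered after the fact does not satisfy the timing requirement.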
4. Provide Adverse Decision Explanations and Appeal Rights
When a high-risk AI system contributes to an adverse consequential decision, the deployer must provide:
- The principal reasons for the adverse decision
- The degree to which AI contributed to the outcome
- The types of data processed and data sources used
- An opportunity to correct inaccurate personal data
- A right to appeal, with human review when technically feasible
Human review exceptions are limited to cases where delay would pose a genuine safety risk to the consumer. Employers who use AI for hiring, performance reviews, or termination decisions must build structured appeal workflows into their HR processes before the effective date.
5. Disclose Discovered Discrimination Within 90 Days
If a deployer discovers that a high-risk AI system has caused algorithmic discrimination, it must notify the Colorado Attorney General within 90 days of discovery. There is no private right of action under CAIA — enforcement is exclusively with the AG — but the self-disclosure obligation means deployers cannot quietly remediate and move on. The 90-day notification window begins from the moment of credible internal knowledge, not from the date of remediation.
6. Conduct Annual Reviews
Deployers must annually review each deployed high-risk AI system to confirm it is not causing algorithmic discrimination. This review obligation runs in parallel with impact assessments — the annual review is a standing monitoring commitment, not a one-time event.
Developer Obligations
Developers who provide high-risk AI systems to Colorado deployers carry symmetric but distinct obligations. Developers must:
- Provide deployers with comprehensive documentation including model and dataset cards, known limitations, foreseeable harmful uses, and the information needed for deployers to complete their own impact assessments
- Publish a publicly available statement summarizing high-risk systems they develop and how they manage discrimination risks
- Notify the AG and all known deployers within 90 days of discovering or receiving a credible report that a system has caused or is likely to cause algorithmic discrimination
For vendors selling AI underwriting models, hiring screening tools, or credit decisioning engines to Colorado deployers, the developer obligations create a contractual due diligence requirement on both sides of the transaction. Deployers will increasingly require model cards, dataset documentation, and discrimination testing evidence as conditions of vendor contracts.
Enforcement: AG Authority and the $20,000-Per-Violation Structure
The Colorado Attorney General has exclusive enforcement authority under CAIA. Violations constitute unfair trade practices under the Colorado Consumer Protection Act, carrying civil penalties of up to $20,000 per violation. Violations are counted per consumer and per transaction — an insurer using a non-compliant underwriting model across 50,000 Colorado policyholders faces aggregate exposure of up to $1 billion in theory, though actual enforcement has not yet tested that ceiling.
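The exposure arithmetic above is a straight per-consumer, per-transaction multiplication. A minimal sketch (the function name is ours, and this computes only the theoretical ceiling, not a predicted penalty):

```python
# Civil penalty ceiling per violation under the Colorado Consumer Protection Act
MAX_PENALTY_PER_VIOLATION = 20_000  # USD

def max_exposure(consumers: int, transactions_per_consumer: int = 1) -> int:
    """Theoretical ceiling: violations count per consumer and per transaction."""
    return MAX_PENALTY_PER_VIOLATION * consumers * transactions_per_consumer

# The article's example: one non-compliant model across 50,000 policyholders
print(f"${max_exposure(50_000):,}")  # $1,000,000,000
```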
The law provides important safe harbors. A deployer that discovers a violation through its own monitoring or through feedback from affected users, cures the violation, and remains in compliance with a recognized framework (NIST AI RMF or ISO 42001) has an affirmative defense. Similarly, documented alignment with a recognized risk management framework creates a rebuttable presumption that the deployer used reasonable care — a meaningful evidentiary standard in any enforcement proceeding.
Colorado vs. the EU AI Act: Key Differences
SB24-205 was modeled in part on the EU AI Act, but the two frameworks diverge in material ways:
| Dimension | Colorado SB24-205 | EU AI Act |
|---|---|---|
| Risk classification | Binary: consequential decisions vs. not covered | Four tiers: prohibited, high-risk, limited-risk, minimal-risk |
| Scope | 8 consequential decision domains | Broader, including biometrics, law enforcement, migration |
| Prohibited uses | None — only regulates, does not ban | 8 categories of AI are prohibited outright |
| Penalties | $20,000/violation (per consumer) | Up to €35M or 7% of global turnover |
| Record retention | 3 years | 10 years |
| Enforcement | State AG only | National competent authorities |
| Public policy disclosure | Required | Not required (internal risk management system required) |
| GPAI/foundation model obligations | None | Yes — separate chapter for general-purpose AI |
Colorado is meaningfully narrower in scope — it prohibits no AI practices outright — but arguably more aggressive on transparency: the requirement to publish a risk management policy, which has no analog in EU law, creates reputational and competitive stakes that purely internal compliance frameworks do not.
For organizations already building EU AI Act compliance programs, Colorado's impact assessment requirements overlap significantly with EU Article 27 fundamental rights impact assessments for deployers. A well-designed EU impact assessment can be adapted to satisfy Colorado's requirements with targeted additions — particularly around the NIST AI RMF alignment documentation and the AG notification provisions.
What Employers Must Do Now
Employers using AI for any employment decision — applicant screening, performance scoring, promotion ranking, termination risk assessment — face a concrete compliance checklist:
- Inventory all AI systems touching employment decisions in Colorado. This includes third-party vendor tools, not just internally built models.
- Classify each system: does it meet the CAIA definition of high-risk (i.e., does it make or substantially factor into employment decisions)?
- Negotiate vendor contracts to require developer documentation — model cards, dataset descriptions, discrimination testing results — sufficient to complete your own impact assessments.
- Complete initial impact assessments for all in-scope systems before June 30, 2026.
- Build consumer notice workflows: employees and applicants must be notified before AI is used in consequential decisions about them.
- Establish appeal processes with human review for adverse employment decisions.
- Publish a transparency statement on the company website describing high-risk AI systems in use.
Vendors including Holistic AI and Credo AI provide structured workflows for each of these steps, with documentation templates mapped to SB24-205's specific requirements. FairNow and Fairly AI focus specifically on bias detection in employment screening contexts, including disparate impact analysis and remediation reporting.
Key Takeaways
- The effective date is June 30, 2026 — pushed from February 1, 2026 by SB 25B-004, signed August 28, 2025. This is a hard deadline.
- High-risk AI is any system making or substantially influencing consequential decisions in 8 domains including employment and insurance.
- Both developers and deployers bear independent obligations; each must document, disclose, and monitor for algorithmic discrimination.
- Insurers face layered compliance: SB 21-169 (Division of Insurance) and SB24-205 (AG) run in parallel and require separate documentation programs.
- Impact assessments must be completed before deployment, annually, and within 90 days of substantial modification — retained for three years.
- Consumer notice and appeal rights must be operationalized in HR and insurance workflows, not just documented in policies.
- $20,000 per violation per consumer: aggregate exposure for widespread non-compliant deployment is substantial.
- Alignment with NIST AI RMF or ISO 42001 creates a rebuttable presumption of reasonable care — the closest thing to a compliance safe harbor the law provides.
- The Colorado AI Act framework page tracks regulatory developments and rulemaking updates as they occur.
Sources
- Colorado SB24-205 — Official Bill Text (Colorado General Assembly) — https://leg.colorado.gov/bills/sb24-205
- SB 25B-004 — Postponement of Colorado AI Act Implementation (Akin Gump) — https://www.akingump.com/en/insights/ai-law-and-regulation-tracker/colorado-postpones-implementation-of-colorado-ai-act-sb-24-205
- A Deep Dive into Colorado's Artificial Intelligence Act (National Association of Attorneys General) — https://www.naag.org/attorney-general-journal/a-deep-dive-into-colorados-artificial-intelligence-act/
- Colorado's Landmark AI Law Coming Online (Brownstein Hyatt) — https://www.bhfs.com/insight/colorados-landmark-ai-law-coming-online-what-developers-and-deployers-should-know/
- Colorado Compels Insurers to Audit AI Underwriting for Algorithmic Discrimination (Jurvantis) — https://jurvantis.ai/colorado-compels-insurers-to-audit-ai-underwriting-for-algorithmic-discrimination/
- Colorado SB21-169: 8 Things You Need to Know (Credo AI) — https://www.credo.ai/blog/colorado-sb21-169-8-things-you-need-to-know-about-colorados-new-ai-insurance-regulation
- New AI Compliance Requirements for Colorado Employers (Rocky Mountain Employer Blog) — https://www.rockymountainemployersblog.com/blog/2025/12/5/new-ai-compliance-requirements-prohibit-discrimination-for-colorado-employers
- Colorado AI Act vs EU AI Act Comparison (CO-AIMS) — https://co-aims.com/blog/colorado-ai-act-vs-eu-ai-act-us-companies
- Colorado SB 24-205 Compliance Guide (Stack Cybersecurity) — https://stackcyber.com/posts/ai-colorado-laws
- Colorado SB21-169 Solution — Holistic AI — https://www.holisticai.com/colorado-sb21-169