Colorado's SB 24-205 (the Colorado AI Act), signed into law in May 2024, is the first comprehensive US state law on AI. It takes effect 1 February 2026 and regulates "high-risk AI systems" used to make, or to be a substantial factor in making, "consequential decisions" affecting Colorado residents — in areas such as employment, housing, lending, insurance, healthcare, education, and legal services.
Who is in scope?
Both developers (those who build or substantially modify high-risk AI systems) and deployers (those who use high-risk AI systems to make consequential decisions) must comply. Small deployers (fewer than 50 full-time employees, subject to further conditions) are exempt from some duties, such as the impact-assessment requirement, but the core duties attach at both ends of the supply chain.
What are the key duties?
Developers must provide documentation to deployers covering intended uses, a high-level summary of the training data, known or reasonably foreseeable risks of algorithmic discrimination, and appropriate use. Deployers must implement a risk-management policy (a program aligned with the NIST AI Risk Management Framework is explicitly acceptable), complete impact assessments, notify consumers when a high-risk AI system is involved in a consequential decision, offer opportunities to correct inaccurate personal data and to appeal adverse decisions, and report discovered algorithmic discrimination to the Colorado Attorney General.
How does it compare to the EU AI Act?
The Colorado law has a narrower scope (limited to consequential decisions), no conformity-assessment or CE-marking regime, and enforcement by the Colorado Attorney General rather than a dedicated AI authority. But the core ideas — risk categorization, a provider/deployer split, documentation duties, impact assessments — track the EU AI Act closely, so organizations already doing EU AI Act compliance work can reuse much of that output for Colorado.
What are the penalties?
Violations are treated as unfair trade practices under the Colorado Consumer Protection Act. The Attorney General has exclusive enforcement authority; there is no private right of action. There is an affirmative defense for a company that discovers a violation through internal testing or user feedback, cures it, and otherwise complies with a recognized AI risk-management framework such as the NIST AI RMF.