Colorado Lawmakers Float Rework of AI Law
April 6, 2026
Earlier this year, I wrote about the upcoming deadlines for Colorado’s AI Act. The 2024 law was initially set to enter into force in February 2026. However, concerns over compliance burdens pushed the effective date back to June 30, 2026, pending a rework. That rework has finally arrived: Colorado’s working group released its proposed rewrite last month. A Fisher Phillips memo summarizes all the proposed changes:
- “Narrower scope – but employment is still squarely covered. The proposal focuses on “covered ADMT,” which is automated decision-making technology that “materially influences” a consequential decision. Employment and employment opportunities are explicitly included in the definition of consequential decisions, alongside housing, credit, education, insurance, health care, and essential government services. Routine scheduling, administrative routing, and workflow management are carved out.
- Common AI tools are off the hook. Spell-check, calculators, spreadsheets, robocall filters, and general-purpose large language models like ChatGPT are excluded, as long as they aren’t specifically configured or marketed for use in consequential decisions.
- Upfront notice to applicants and employees. Employers using covered AI tools in hiring or employment decisions must provide clear, conspicuous notice to job applicants and employees that AI is being used. This can be satisfied through a public-facing notice (such as a link or posting reasonably near the point of interaction) rather than individualized disclosures at every touchpoint.
- Post-adverse outcome disclosures. When an AI-assisted decision results in an adverse outcome (such as a rejection, a termination, or a denial of an opportunity), the employer must provide the affected individual within 30 days with a plain-language explanation of the AI’s role, the categories of data the system used, instructions on how to request correction of inaccurate personal data, and information on how to request human review.
- The right to human review. Workers and applicants who receive an adverse AI-assisted decision can request meaningful human review and reconsideration “to the extent commercially reasonable.” That human reviewer must have actual authority to override the decision, must be trained for the role, and cannot simply defer to the system’s output.
- Shared liability between developers and deployers. One of the most contested issues in the original law was who bears responsibility when AI goes wrong. The proposed rewrite splits liability based on relative fault. Developers are responsible for harms that arise from their systems being used as intended. Employers (as “deployers”) are responsible for their own independent decisions, including using AI in ways the developer didn’t intend or authorize. Indemnification clauses that would shift a party’s own liability to the other are void as against public policy.
- Enforcement stays with the AG – no private lawsuits. The Colorado Attorney General has exclusive enforcement authority. There is no private right of action under this law. Violators get a 90-day cure period before the AG can seek civil penalties, unless the violation was knowing or repeated.
- Effective date pushed. Finally, as noted, the effective date of the law would shift to January 1, 2027.”
This rewrite is designed to ease compliance for AI companies by streamlining some of the law’s more burdensome requirements. Previous rework attempts were advanced and shot down when no consensus could be reached. If these amendments fail, the law will enter into force as written on June 30. This puts significant pressure on the legislature to reach an agreement before its session ends in May.