
What You Need to Know About the Colorado AI Act

Artificial Intelligence | ISO 42001

Published: Sep 24, 2025

Colorado is leading the charge on U.S. AI policy with its Consumer Protections for Artificial Intelligence law (SB24-205). This law, commonly referred to as the Colorado AI Act (CO AI Act), is the first comprehensive state law enacted to regulate high-risk AI systems. Signed in May 2024, it sets a precedent for balancing innovation with consumer protection through requirements on transparency, accountability, and fairness.

In this article, RockCyber, a strategic advisory and consulting firm specializing in AI governance, cybersecurity, and risk management, partners with Schellman, a leading compliance and certification firm, to explore what the Colorado AI Act means for organizations. Together, we look at key compliance considerations and explain how the ISO/IEC 42001:2023 and the NIST AI Risk Management Framework (AI RMF) can help organizations get ready for this law and other emerging AI regulations. 

What is the Colorado AI Act? 

Modeled in part on the EU AI Act, the CO AI Act applies to high-risk AI systems, meaning those used in consequential areas, specifically employment, housing, education, healthcare, insurance, legal, and financial services. These systems directly impact consumer rights, opportunities, or safety, such as hiring decisions, loan approvals, or insurance underwriting. 

The Act’s core focus is preventing algorithmic discrimination, defined as unlawful differential treatment based on protected characteristics (e.g., age, race, sex, disability). Consumers are defined as Colorado residents, and duties are assigned to both developers and deployers conducting business within the State of Colorado. 

Who Must Comply? 

If you develop or deploy high-risk AI in Colorado, the Colorado AI Act applies to you unless you qualify for one of a limited set of exemptions, including: 

  • Small businesses with fewer than 50 employees that do not train the AI system on their own data, provided they make the developer’s impact assessment available to consumers. 
  • Federally regulated systems already approved by agencies like the FDA, FAA, or FHFA, or those that meet equivalent or stricter federal standards. 
  • Federal research and contracts, such as work for the Department of Defense, NASA, or the Department of Commerce (unless the AI is used for employment or housing decisions). 
  • HIPAA-covered healthcare entities using AI for recommendations that still require provider action, provided the system isn’t classified as high-risk. 
  • Insurers and their vendors already subject to Colorado’s insurance AI regulations (C.R.S. 10-3-1104.9). 
  • Banks and credit unions examined by state or federal regulators under AI standards that are at least as strict as Colorado’s. 
  • Federal government systems, unless used for employment or housing decisions. 

Bottom line: The Act is designed to capture most organizations building or using high-risk AI in Colorado, while carving out small businesses and industries already under heavy federal oversight. 

Colorado AI Act Key Provisions 

For Developers 

  • Documentation & Disclosure
    Provide deployers with documentation on the AI system’s purpose, training data summaries, limitations, benefits, and risk-mitigation measures. 
  • Incident Reporting
    Notify the Colorado Attorney General and deployers of any risks of algorithmic discrimination within 90 days of discovery. 

For Deployers 

  • Risk Management
    Establish and maintain a risk management program. This should align with frameworks like ISO 42001 or NIST’s AI Risk Management Framework. 
  • Impact Assessments
    Conduct annual impact assessments (and after major updates) that evaluate performance, purpose, limitations, and potential harms, including discrimination. 
  • Consumer Notifications
    Inform consumers when AI is used in consequential decisions, explain the system’s role and data sources, and provide avenues for corrections, appeals, or human review. 

Shared Duties 

  • Reasonable Care
    Both developers and deployers must use reasonable care to prevent algorithmic discrimination. 
  • Public Transparency
    Both must publish accessible statements about the high-risk AI systems they build or use, along with measures to manage risks. 

Enforcement & Timeline 

The Act will be enforced by the Colorado Attorney General, with violations treated as consumer protection violations subject to civil penalties of up to $20,000 per violation (with higher penalties possible where elderly consumers are affected). A safe harbor provision offers reduced liability for organizations that can demonstrate compliance with the Act’s requirements, providing legal protection through a rebuttable presumption of reasonable care. 

Following a recent special session, lawmakers extended the law’s effective date from February 1, 2026, to June 30, 2026. This gives companies time to strengthen their AI governance practices and align with the Act’s requirements. 

How Does the Colorado AI Act Compare? 

In the U.S. and abroad, AI laws with similar requirements to the Colorado AI Act continue to emerge. Below are just a few key examples and how they relate to Colorado’s requirements: 

Colorado AI Act (CAIA) 

  • Jurisdiction: U.S. (Colorado) 
  • Status / Effective Date: Enacted May 17, 2024; effective June 30, 2026 
  • Core Requirements: Requires risk management programs, annual impact assessments, and safeguards against algorithmic discrimination in high-risk AI (employment, housing, healthcare, etc.). 
  • Parallels with CAIA: First enacted U.S. state AI law; mandates governance and fairness for consequential decisions. 

EU AI Act 

  • Jurisdiction: European Union 
  • Status / Effective Date: In force Aug 1, 2024; phased rollout through Aug 2, 2027 
  • Core Requirements: Risk-tiered rules: bans unacceptable AI, strict controls for high-risk systems, transparency for GPAI, fines up to 7% of global revenue. 
  • Parallels with CAIA: Similar risk-based model; broader scope and stronger enforcement across all AI actors. 

South Korea AI Basic Act 

  • Jurisdiction: South Korea 
  • Status / Effective Date: Enacted Jan 21, 2025; effective Jan 22, 2026 
  • Core Requirements: Establishes national AI governance; requires risk management for “high-impact” AI; mandates transparency, fairness, and accountability; includes rights protections for citizens. 
  • Parallels with CAIA: Like the CO AI Act, regulates high-risk AI use cases extraterritorially and requires governance and oversight, but at a national scale. 

Texas Responsible AI Governance Act (TRAIGA) 

  • Jurisdiction: U.S. (Texas) 
  • Status / Effective Date: Enacted June 22, 2025; effective Jan 1, 2026 
  • Core Requirements: Bans manipulative, rights-infringing, or social-scoring AI; regulates government and limited private-sector AI use; requires responsible development. 
  • Parallels with CAIA: Aligns on responsible AI principles and restrictions on high-risk practices in the public and private sectors. 

California AB 2013 (AI Training Data Transparency Act) 

  • Jurisdiction: U.S. (California) 
  • Status / Effective Date: Enacted Sep 28, 2024; effective Jan 1, 2026 
  • Core Requirements: Requires developers of generative AI to publish summaries of training datasets (sources, licensing, personal/synthetic data, modifications). 
  • Parallels with CAIA: Complements the CO AI Act by mandating transparency and accountability in model training data. 

Challenges and Considerations 

In its report, the cross-sector task force appointed to evaluate the Colorado AI Act identified several implementation challenges: 

  • Definitions: Key terms such as “consequential decisions,” “substantial factor,” and “algorithmic discrimination” are not precisely defined, creating uncertainty about the scope of coverage. 
  • Overlap in requirements: The relationship between impact assessments and risk management programs is not fully specified, including when each must be conducted and how they interact. 
  • Resource considerations: Smaller businesses may face challenges if exemptions narrow, while larger organizations must reconcile disclosure obligations with the need to protect proprietary information. 
  • Enforcement scope: Questions remain regarding the contours of the “duty of care” and the Attorney General’s enforcement authority. 

Amendment SB25-318 attempted to clarify several of these areas, particularly definitions, exemptions, and compliance triggers, but did not advance. For now, organizations will need to plan compliance strategies under the Act’s current language. 

Using ISO 42001 and NIST AI RMF for Compliance Readiness 

ISO/IEC 42001:2023, the first certifiable international standard for Artificial Intelligence Management Systems (AIMS), and the NIST AI Risk Management Framework (RMF) are both explicitly referenced in the Colorado AI Act as recognized models for responsible AI governance.  

Though voluntary, these frameworks give organizations a practical roadmap to demonstrate “reasonable care,” implement responsible AI programs, and strengthen compliance with the CO AI Act and emerging AI laws worldwide. Key areas of alignment include: 

Risk Management & Oversight 

  • CO AI Act requires lifecycle risk management programs for high-risk AI, with the Attorney General empowered to review policies and records. 
  • ISO 42001 operationalizes this with defined risk processes, monitoring, audits, and management reviews. 
  • NIST AI RMF structures continuous oversight through its four functions (Govern, Map, Measure, Manage), emphasizing post-deployment monitoring and incident response. 

Transparency, Documentation & Impact Assessments 

  • CO AI Act requires developers to publish statements on high-risk AI systems and risk controls, while deployers must notify consumers, explain system roles in consequential decisions, and complete annual impact assessments. 
  • ISO 42001 integrates documentation controls, retention guidance, and impact assessment requirements. 
  • NIST AI RMF prescribes transparency as a trustworthiness goal, requiring documented impacts, explainability, and regular review throughout deployment. 

Bias & Fairness 

  • CO AI Act imposes a duty of reasonable care to prevent algorithmic discrimination. 
  • ISO/IEC 42001 directs organizations to assess and document fairness impacts across the AI lifecycle. 
  • NIST AI RMF treats fairness as a core outcome, addressing data quality and bias considerations. 

Beyond improving internal practices, certification in ISO/IEC 42001 and alignment with the NIST AI RMF address many of the Act’s core requirements and provide the demonstrable evidence regulators expect. Organizations that follow these frameworks can generate the records needed to show compliance with the Colorado AI Act, positioning themselves to qualify for its safe harbor protections, reduce liability exposure, and reinforce trust with regulators, customers, and stakeholders. 

Beyond the CO AI Act: Preparing for the Next Wave of AI Laws 

The Act’s requirements take effect in 2026, and more AI laws continue to advance across the U.S. and globally. The most reliable way to prepare is by building on two recognized pillars of responsible AI: ISO/IEC 42001 and the NIST AI Risk Management Framework. Together, they establish a defensible program that meets Colorado’s “reasonable care” standard while scaling to align with other emerging laws. 

ISO 42001 gives you a certifiable governance system: policies, roles, risk and impact assessments, documentation, monitoring, and continual improvement. NIST AI RMF adds practical depth through risk mapping, bias mitigation, transparency, monitoring, and incident response. Used together, they give organizations a structured yet flexible foundation to operationalize compliance. 

How to Prepare 

  • Collaborate across departments to integrate legal, technical, ethical, and business perspectives in your AI development and deployment processes. 
  • Catalog and classify all AI systems (built and bought), flag high-risk uses, and name owners. 
  • Standardize impact assessments before launch, annually, and after major changes, ensuring evidence is maintained. 
  • Test and monitor for bias, performance, and security; define incident playbooks and review triggers. 
  • Tighten vendor contracts to require disclosures, testing support, change notices, audit rights, and cooperation with your assessments. 
  • Centralize documentation so you can demonstrate “reasonable care” on request. 

Strengthening AI Governance with RockCyber and Schellman 

RockCyber provides gap analyses, policy development, and Virtual Chief AI Officer services to operationalize AI strategy, governance, and compliance. Our RISE and CARE frameworks, which cross-map the CO AI Act, ISO/IEC 42001, the NIST AI RMF, and the EU AI Act, equip your organization with a defensible, strategic, and future-proof program. Contact us to start today. 

Schellman, as the first ANAB-accredited Certification Body for ISO 42001, takes a holistic approach to assisting organizations with their AI governance, coordinating with our experts across software security, red teaming, and more to provide the most robust and reliable assessment and certification experience across our AI services. Contact us to learn more. 

In the meantime, discover other helpful ISO 42001, NIST AI RMF, and AI policy insights in these additional resources from RockCyber and Schellman: 

  • Introducing RISE and CARE: A New Era in AI Strategy and Governance 
  • ISO 42001: Frequently Asked Questions 
  • TRAIGA Compliance Countdown: Texas AI Law Playbook 
  • A Global Snapshot of AI Laws and How Compliance with ISO 42001 Can Help 
  • EU AI Act Compliance 
  • Understanding U.S. AI Policy: Executive Orders, the Big Beautiful Bill, & America’s AI Action Plan 

About the Authors

Danny Manimbo is a Principal with Schellman based in Denver, Colorado. As a member of Schellman’s West Coast / Mountain region management team, Danny is primarily responsible for leading Schellman's AI and ISO practices as well as the development and oversight of Schellman's attestation services. Danny has been with Schellman for 10 years and has over 13 years of experience in providing data security audit and compliance services. 

 

Sabrina Caplis is an AI Governance, Risk, and Security Consultant with RockCyber, where she advises organizations on strengthening AI risk management, compliance readiness, and the development of responsible and secure AI programs. She has helped design and implement AI governance and security initiatives for Fortune 100 and Fortune 1000 companies, building policies, frameworks, and enterprise-wide programs aligned with emerging global regulations and standards. In addition to program development, she supports clients with security and policy assessments, tool rationalization, business continuity, and vCISO operations. Active in the cybersecurity community, Sabrina serves on the boards of ISSA and ISACA Denver and is a contributor to the OWASP GenAI Security Project. She has published and spoken on AI and cybersecurity at national and international forums and contributes to global innovation initiatives as a part of the World Economic Forum’s Global Shapers Community. 

About Schellman

Schellman is a leading provider of attestation and compliance services. We are the only company in the world that is a CPA firm, a globally licensed PCI Qualified Security Assessor, an ISO Certification Body, HITRUST CSF Assessor, a FedRAMP 3PAO, and most recently, an APEC Accountability Agent. Renowned for expertise tempered by practical experience, Schellman's professionals provide superior client service balanced by steadfast independence. Our approach builds successful, long-term relationships and allows our clients to achieve multiple compliance objectives through a single third-party assessor.