AI Regulation

Colorado SB205: What HR Tech platforms need to know

Understand Colorado SB205’s new AI regulation and its impact on HR Tech.

Jeffrey Pole

Published on
September 13, 2024

On May 17, 2024, Colorado passed SB24-205, a significant new law focused on consumer protection for Artificial Intelligence systems. This law addresses risks posed by high-stakes areas including employment, and introduces new requirements for preventing algorithmic discrimination and bringing transparency to AI systems. For HR Tech platforms, the law brings a need for careful compliance, especially when their tools influence decisions about individuals.

AI/algorithmic discrimination under Colorado SB205

Colorado SB205 applies to AI systems considered “high-risk,” which are defined as systems influencing decisions in key areas including: employment, education, financial or lending services, essential government services, healthcare, housing, insurance, and legal services.

A system is classified as high-risk if it substantially influences decisions, though the law leaves room for interpretation. It broadly covers situations where an AI system either “makes” or is a “substantial factor” in decisions with material legal or similarly significant effects. This grey area may lead to varying interpretations as the law is enforced.

Discrimination categories

One of the primary focuses of SB205 is to prevent discrimination in decisions made by AI. The law protects individuals from discrimination based on numerous categories, including:

  • Age
  • Color
  • Disability
  • Ethnicity
  • Genetic information
  • Religion
  • Limited English proficiency
  • National origin
  • Race
  • Reproductive health
  • Sex
  • Veteran status

Requirements for HR Tech platforms under Colorado SB205

Under SB24-205, HR Tech platforms are classified as “developers” and must meet several new obligations if their AI systems influence consequential decisions, such as those related to hiring.

Transparency information to employers

Platforms must provide clear and detailed documentation to employers who deploy their AI systems. This documentation should include:

  • The AI tool’s intended purpose and appropriate/inappropriate uses.
  • High-level information about the data used to train the system.
  • Descriptions of anti-bias testing and data governance procedures.
  • Steps taken to mitigate the risk of discrimination in future use.
  • Instructions for monitoring and managing the AI tool during deployment.

Public notices

Developers are also required to publish public notices on their website that outline:

  • The types of high-risk AI systems they have developed.
  • How the company manages risks associated with algorithmic discrimination in these systems.

Bias testing

Developers are expected to conduct regular anti-bias testing to protect against discrimination risks and to give employers assurance. Ongoing testing feeds into the reporting requirements below.
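SB205 does not prescribe a specific testing methodology, but one widely used statistical check in US employment contexts is the “four-fifths rule” from EEOC guidance: compare each group’s selection rate to the most-favored group’s rate, and flag ratios below 0.8 as potential disparate impact. The sketch below is a hypothetical illustration of that check, not a method mandated by the statute:

```python
from collections import Counter

def selection_rates(outcomes):
    """Per-group selection rates from (group, was_selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate.

    Under the EEOC "four-fifths" rule of thumb, a ratio below 0.8
    flags potential disparate impact and warrants closer review.
    """
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical resume-screening outcomes: (group, passed_screen)
outcomes = (
    [("A", True)] * 60 + [("A", False)] * 40 +   # group A: 60% pass rate
    [("B", True)] * 40 + [("B", False)] * 60     # group B: 40% pass rate
)
ratios = adverse_impact_ratios(outcomes)
# Group B's ratio is 0.40 / 0.60 ≈ 0.67, below the 0.8 threshold,
# so this hypothetical system would merit further investigation.
```

A ratio below the threshold is a signal for deeper statistical and legal review, not proof of unlawful discrimination on its own.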

Incident reporting

If an incident of algorithmic discrimination is detected, whether during live deployment or ongoing testing, developers must notify both the employers using the system and the Colorado attorney general within 90 days.

Requirements for employers under Colorado SB205

Employers who deploy high-risk AI systems must also adhere to specific transparency and compliance measures.

Public transparency notices

Employers are required to disclose information about the AI systems they use, including:

  • The types of high-risk AI systems currently deployed.
  • How the employer manages and mitigates risks related to algorithmic discrimination.
  • The data collected and used by the AI systems.

Consumer notices

Before a significant AI-driven decision is made, such as hiring or promotion, employers must notify the individuals involved. The notice must include a clear explanation of:

  • The purpose of the AI system.
  • How the AI is used to assist the decision.

Right to opt-out

Consumers must be provided with the option to opt out of AI-based processes, allowing them to request human-led processes instead.

Adverse decision requirements

In cases where a decision negatively impacts an individual (e.g., a job rejection), employers must provide:

  • A detailed statement explaining the reasons for the decision.
  • The role the AI system played in reaching that decision.
  • Information about the data processed by the AI system, including its sources.
  • Contact details for the employer.

Right to appeal

Consumers must also be given the right to appeal adverse decisions, with the option for human review where feasible. This ensures that AI decisions are subject to scrutiny and correction if necessary.

Impact assessments

Employers must conduct regular impact assessments of their AI systems at least annually and within 90 days of any major changes to the system. These assessments evaluate:

  • The system’s purpose, use cases, and benefits.
  • Whether the AI system poses any risks of discrimination and what actions have been taken to mitigate those risks.
  • The data used by the AI system, known limitations, and transparency measures.
  • Post-deployment monitoring efforts to ensure continued compliance.

In some cases, employers can use impact assessments conducted for other regulations, such as the EU AI Act, to meet these requirements. They may also hire third-party auditors to perform the assessments.
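The cadence described above, at least annually and within 90 days of a major change, can be sketched as a simple deadline calculation. This is an illustrative helper, not statutory language; the exact triggers and what counts as a “major change” under SB24-205 are matters for legal review:

```python
from datetime import date, timedelta

def next_assessment_due(last_annual, last_major_change=None):
    """Earliest deadline for the next impact assessment, assuming
    SB24-205's cadence: at least annually, plus within 90 days of
    any major modification to the AI system."""
    deadlines = [last_annual + timedelta(days=365)]
    if last_major_change is not None:
        deadlines.append(last_major_change + timedelta(days=90))
    return min(deadlines)

# A system last assessed on 2026-03-01 and substantially
# modified on 2026-08-15 is due 90 days after the change.
due = next_assessment_due(date(2026, 3, 1), date(2026, 8, 15))
# due == date(2026, 11, 13)
```

With no intervening change, the annual deadline governs; whichever trigger produces the earlier date sets the due date.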

Risk management policy and program

Employers deploying high-risk AI systems must implement a comprehensive risk management policy and program. This policy must outline the principles, processes, and personnel responsible for identifying, documenting, and mitigating known or reasonably foreseeable risks of algorithmic discrimination.

The program should adhere to recognized industry standards, such as the latest version of the National Institute of Standards and Technology's (NIST) Artificial Intelligence Risk Management Framework, ISO/IEC 42001, or another nationally or internationally recognized AI risk management framework that meets or exceeds the requirements of SB24-205.

Incident reporting

Any incidents of algorithmic discrimination discovered by the employer during AI system deployment must be reported to the Colorado attorney general within 90 days.

Timeline for Colorado SB205 compliance

Colorado’s SB24-205 was signed into law on May 17, 2024, and will officially go into effect on February 1, 2026. This provides HR Tech platforms and employers with some time to implement the necessary measures to ensure compliance with the law.

For more information about Colorado SB205, visit Colorado General Assembly.

How Warden can help with Colorado SB205 compliance

Warden AI simplifies SB205 compliance for HR Tech platforms by providing ongoing bias testing, monitoring, and transparency showcasing.

By partnering with Warden, HR Tech platforms can also equip the employers they serve to meet these regulations with minimal effort through our in-platform audits and tools.

Schedule a demo to find out how Warden can help you and your customers comply with Colorado SB205.

Start building AI trust today