Revise, delay or implement? The standoff over Colorado’s AI discrimination law

The Colorado Capitol dome in Denver, April 25, 2025. (Hart Van Denburg/CPR News)

Nine months: that’s all the time left before companies have to start complying with Colorado’s first-in-the-nation anti-discrimination law for AI systems, unless policymakers act.

Business and industry groups have been begging for a delay. They say the law as it stands is unworkable, and they're urging Colorado's lawmakers to give all sides more time to find a compromise.

But consumer rights advocates say AI's rapid spread into more and more areas of life makes it critical to put guardrails on how the technology is used. Many advocates for the law also feel some in the tech industry won't be satisfied with anything short of a full capitulation on the policy's most meaningful consumer protections.

And after a dramatic end to the legislative session, when the Senate Majority Leader introduced and then pulled what was intended to be a compromise bill, there’s an impasse on how to move forward. Business groups and some universities and schools don’t want to discuss any policy tweaks right now. They want Colorado lawmakers to push back the implementation date first.

The process to revise the law began the moment it was signed, but so far hasn’t panned out 

Over the past year, a task force made up of business and tech, labor, consumer, and privacy experts tried to develop compromise policies around the various elements of the AI law. The goal was to introduce revisions in the 2025 legislative session. But the session came and went without lawmakers approving anything.

A business-led effort to delay the law’s implementation by roughly a year also failed. Now Gov. Jared Polis says he backs a federal moratorium on all state AI laws, which would effectively make Colorado’s law moot.

Colorado’s law applies to education, banking, hiring and healthcare; companies and other entities must notify people when an AI system is being used in decisions about their lives. It also lets people correct data and appeal a potentially adverse AI decision to a real person. Entities using AI would have to publish the types of AI systems they use and how they manage any known risks. The Colorado Attorney General’s office would enforce the law, but it doesn’t give people a new private right of action to sue.

Margot Kaminski is a University of Colorado law professor who focuses on AI law. She served on the state’s AI impact task force, and said while there’s been a lot of bipartisan energy nationally around data privacy and AI, Colorado’s approach is different because it's an anti-discrimination law. 

“It's about using technologies in ways that discriminate against people on the basis of their membership in a historically protected class and legally protected class,” she said. 

“It obliges companies to do some risk mitigation around having systems that are potentially discriminatory… And the right to appeal an AI decision, to my knowledge, is the first time that this has been enacted in the country,” added Kaminski.

When he signed the original bill, Polis asked lawmakers to focus their regulations on the software developers who create AI systems, rather than small companies that use them. Polis also warned that, without changes, the law could be used to target those employing AI even when its application is not intentionally discriminatory. 

Task force member Vivek Krishnamurthy, a CU law professor who focuses on technology law and AI system regulations, said that throughout the months of meetings, there was always a fundamental disagreement among members about whether or not a law like Colorado’s was needed. 

He characterized the industry’s opposition as “‘we have anti-discrimination laws already. Why do we need something here?’”

Krishnamurthy, though, is on the side that believes specific policies are necessary. He doesn’t think current anti-discrimination laws, which are based on the prejudices a human might hold against someone else, apply to automated systems that humans build. 

“They are built in particular ways and given historic data to train upon. And unless we have some transparency into how the systems are built, some transparency into when they're used, and some rules of the road for how you build these things, how you test them, and how you monitor them, it seems very unlikely to me that the existing laws that we have that prohibit discrimination are going to be effective.”

Krishnamurthy said by the time the task force wrapped up months of discussions, it hadn’t even really addressed the thorniest issues on compliance and disclosure about AI system use. 

“There was a lot of fear about just the scope of the application of the law that derailed some of the conversations that we could have had about how to tailor different parts of it in ways that might be less burdensome, to sort of increase the runway for companies to comply over time.”

Business and other government entities push back on scope of law 

The Colorado Technology Association was a key negotiator on the task force; it says it supports the intent of the law but objects to its vague language and broad scope.

“We have been pushing for changes to our serious concerns with this law for the past year, and it's very unfortunate we're here at this stage and having to talk about, how do we get back together this summer to extend this date,” said CTA President Brittany Morris Saunders. 

She also emphasized that companies using AI are already trying to comply with current anti-discrimination laws. "No employer is going to deploy a system unless it is not resulting in bias or discrimination."

Opponents have gained some potentially influential allies in Colorado’s institutions of higher education and K-12 schools, which say they weren’t involved in discussions about the policy and have asked for a delay.

In two different letters to lawmakers, the education community warned the law could be costly to comply with — and might touch their students in unintended ways.

The University of Colorado Boulder campus seen from Flagstaff Mountain Road, Oct. 30, 2024. (Hart Van Denburg/CPR News)

“If we had been consulted as stakeholders, we would have noted that the law could limit the ability of our students to embrace new technology in the classroom and then launch their careers in Colorado,” states a letter signed by CU, the Colorado Community College System, Colorado State University, University of Northern Colorado, Colorado Mesa University, Colorado School of Mines, MSU-Denver and Western Colorado University.  The letter said the law would stifle research and innovation and put faculty, students and graduates at a disadvantage compared to their peers in other states.  

Separately, some K-12 school groups said compliance would put pressure on already strained school budgets and create costly and unexpected problems for teachers and students.

For supporters of the law, the question of whether historic biases are creeping into AI systems grows more urgent as more and more entities use the technology to help them sift through data and resumes. But many opponents note AI isn’t new and has been in use for years.

Chris Erickson, a co-founder of Range Ventures, a Colorado-based early stage venture capital fund that focuses on local startups, uses a restaurant as an example of one problem he sees with the new law: it empowers people to question any decision in which AI was used, even if the technology worked properly.

“You've put out a job for a hostess or something, you get a hundred applicants. You only hire one. Now 99 people get a right of explanation as to why they didn't get the job. Ninety-nine people could appeal the decision for a job that's already been filled. And so that's not really at all about AI, that's really about disclosures and hiring and a bunch of other things that actually have nothing to do with underlying technology.” 

A “help wanted” sign in the front window of the Crepe Cart on Main Street in Breckenridge, November 30, 2021. (Hart Van Denburg/CPR News)

The right to question decisions involving AI also troubles Bryan Leach, the founder and CEO of Denver-based Ibotta, a mobile tech company. He says that, as the law is written, many companies would be on the hook for things far outside their control or knowledge.

“Even if they didn't develop the software, even if they're just a restaurant or a plumbing company, they have to get an individualized explanation of what data was considered regarding [the person], all the different steps that were taken in mitigation to make sure that this disparate impact didn't happen."

Some in the tech industry, like Erickson, also think the law’s definition of AI is too broad, because it includes data crunching and sorting programs. He’d prefer a narrower definition that’s limited to generative AI, the kinds of programs that can create content and chat with people. 

“I'm not saying that we are against being thoughtful about putting in some guardrails right now as this technology is being developed. A notification to people that they are interacting with an AI system where they expected to interact with a human, I actually think that that's a thoughtful place to start with this,” he said.

However, Colorado’s law is much broader, and narrowing it to something like what Erickson is suggesting is a non-starter for consumer and labor advocates.

“We're not opposed to any changes, we just want to make sure that the core protections for consumers aren't weakened, or loopholes or exemptions aren't added that would make the bill toothless,” said Kara Williams, a law fellow with the Electronic Privacy Information Center. Her organization has been involved in negotiations. 

‘There needs to be greater transparency’

Matt Scherer served on the state AI task force and leads the workers' rights project at the Center for Democracy and Technology. He feels labor and consumer groups have been negotiating in good faith and have already made a lot of concessions to industry. He said efforts to delay Colorado’s law and the push for a federal moratorium on state AI laws make him doubt the industry's willingness to accept any regulations.

“That kind of just shows that there's not any desire for there to be meaningful guardrails or accountability on AI.”  

Several months ago, CPR spoke with representatives from two large global corporations who said they believed their companies could comply with Colorado’s law as written. But generally, business opponents have had the loudest voices in this debate, leaving the defense of the law mostly to consumer advocates and progressive policymakers.

Scherer said that, as is often the case in policy, the loudest voices in the room drive the discussion.

“We are deeply concerned about the widespread use of AI in decisions that are affecting untold numbers of workers and consumers, and that there needs to be greater transparency around how companies are using these tools in those types of decisions.”

Democratic Senate Majority Leader Robert Rodriguez has been leading the negotiations on the AI law. After the session ended without any progress, he said he was committed to further discussions. 

Senate Majority Leader Robert Rodriguez, D-Denver, addresses the Senate on the first day of the 2025 legislative session on Wednesday, Jan. 8, 2025, at the Colorado Capitol in Denver, Colorado. (Jesse Paul/Colorado Sun)

He’s faced criticism from the business community for introducing his bill so late in the session, and for his unwillingness to extend the law’s implementation deadline or fully address their concerns. But at the end of the session, Rodriguez pushed back on how some opponents have framed the law, saying it wouldn’t punish entities that are making an effort to comply.

“At the core of the bill, it was just care, and try” to prevent harm from AI, he said. “And that's the frustrating part that gets lost.” 

Rodriguez told CPR News that while he did feel he made compromises, his priority is the public, not companies or the tech industry.

“I have a line of being the only person in the country with this policy, and I don't want to be the person that sets up a model for the country that does not protect consumers,” he said.