A variety of advocacy groups are putting pressure on Colorado to thoroughly implement the state’s first-in-the-nation law to try to curb discrimination via AI systems.
The law, approved last session, won’t take effect until 2026. It aims to set guardrails on how companies and government entities can use artificial intelligence to make key decisions about people's lives, in areas ranging from banking and health care to government services.
A task force is scheduled to issue recommendations to lawmakers in February on how to implement it. Additional legislation to modify the law and tackle other aspects of AI will likely be introduced during the upcoming legislative session, which starts in January.
Advocates for consumers, workers, social justice, and privacy have joined together on a letter outlining key provisions they want to see maintained in the law. That includes using a broad definition for AI systems and giving the Colorado Attorney General authority to issue rules interpreting and clarifying the law.
The groups, which include the American Association of People with Disabilities, the Consumer Federation of America, ACLU of Colorado, and the Electronic Privacy Information Center, wrote that the goal is to make it as hard as possible for companies to evade the law.
“It’s also critical that the law builds on—and does not undermine—existing civil rights and consumer protections under Colorado law,” states the letter.
One key question for some of these groups is what role an AI system must play in these high-stakes decisions in order to be covered by the law.
“I think that if the AI system could influence the outcome of the ultimate decision, it should be covered,” said Grace Gedye, a policy analyst with Consumer Reports, which also signed the letter.
Gedye points to a 2023 New York City law that requires employers who use AI technologies in their hiring to conduct independent “bias audits” on some software tools and share them publicly. According to a study from the Citizens and Technology Lab at Cornell University, narrow definitions in the law mean that compliance so far has been low, because companies have generally determined their AI tools aren’t covered.
Low compliance is something backers of increased protections would like to avoid with Colorado’s law. To that end, advocates are focused on transparency and enforcement. Colorado would require people to be informed if an AI system is used to make a decision, but the groups are pushing to make sure those disclosures are clear.
“We are concerned that the explanation and the notice that consumers get before the decision is made won't be detailed enough for consumers to really understand how the decision will be made,” said Gedye.
Advocates also want to expand the law to include a private right of action, allowing people to sue if they believe an AI tool has been misused to their harm, and to let district attorneys' offices conduct investigations as well.
Democratic Rep. Brianna Titone was one of the main sponsors of the bill and serves on the task force working on its implementation. She spoke to CPR News before seeing the letter but said the law was just a first step and that backers had to make compromises to get it passed.
“We understand there are ways of making the policy better and (adding) more accountability, but we needed to balance the benefits of the bill with the opposition we were going to see, and have seen, and still continue to see from the tech companies who want less regulation,” she said.
Titone said she views Colorado’s approach to addressing the challenges posed by AI as the first of many stages, but said it’s important to get something on the books that has some effectiveness.
“We're trying to be strategic and be good negotiating partners with all of the folks involved, to be sure that we're giving that sense of calm and that the unintended consequences that they're saying are going to happen are not going to happen.”
Colorado’s AI task force has heard from top experts in the industry, businesses that are using AI, academics and non-profits about the challenges and issues and how to set up safeguards. One of the biggest sticking points will be on the definitions within the law, to determine which AI systems are covered and how the technology is used to evaluate people.
Governor Jared Polis has expressed his desire that this law, and any others the legislature approves, not impede innovation. He also wants the implementation to focus on those using AI to intentionally discriminate.
“I am concerned about the impact this law may have on an industry that is fueling critical technological advancements across our state for consumers and enterprises alike,” said Polis when he signed the bill into law earlier this year.