Original Reporting (The Trust Project): This article contains firsthand information gathered by reporters. This includes directly interviewing sources and analyzing primary source documents.

With about a week left in this year’s lawmaking session, a new bill was introduced late Monday that would change Colorado’s controversial artificial intelligence law — or at least its impact on small businesses. 

Senate Bill 318 would reduce the administrative steps smaller companies must take to protect consumers against discrimination when their AI systems are used to decide who gets a job, housing, a personal loan, health care, insurance coverage, educational opportunities, or legal or essential government services.

The measure also would delay implementation for about a year and make the resource-intensive parts of the AI law apply initially to companies with 500 or more employees worldwide, instead of 50 or more. That would step down gradually until April 1, 2029, when companies with fewer than 100 workers would be exempt.

Without the revisions to Colorado’s current AI law, any company — even those that didn’t develop their AI — must complete risk assessments, notify consumers that AI is used to make critical decisions, and respond to consumers who appeal critical decisions influenced by AI. That law goes into effect Feb. 1. 

Senate Majority Leader Robert Rodriguez on April 17, 2024, at the Colorado Capitol. (Olivia Sun, The Colorado Sun via Report for America)

“Drafting policy around AI is an uphill battle,” Senate Majority Leader Robert Rodriguez, a Denver Democrat and the bill’s prime sponsor, said in a statement. “As soon as you land on workable policy, you realize you’re two steps behind where the technology currently is.”

Senate Bill 318 builds on last year’s “first-of-its-kind legislation to implement commonsense guardrails, address concerns that I’m hearing from stakeholders, and ensure we’re keeping up with this evolving AI landscape,” he added. “The ultimate goal with this policy is to ensure that we’re protecting consumers that — whether they like it or not — are along for the AI ride.”

Consumer advocates were reluctantly OK with how the law ended up last year because it aimed to protect consumers from computer systems trained on biased data. 

But the changes in the bill were disappointing, said Matthew Scherer, who focuses on workers’ rights as senior policy counsel at the Center for Democracy & Technology.

“Industry got nearly all of the changes it wanted, while public interest groups got only a fraction of what we wanted,” he said. “That said, while the bill strips the law down to its foundation, that foundation is still there and it’s still strong. Labor, consumer and civil rights groups are still processing, but I think there’s an understanding that the tech industry has spent a year trying to make an example out of Colorado and is feeling buoyed by their power in D.C., and this might be the best we can get right now.”

But for AI developers and pretty much any company that might use AI, including technology it didn’t develop, the bill still leaves a big burden on small businesses. Some of it is just delayed, said Chris Erickson, cofounder of Denver venture capital firm Range Ventures.

“The first few years gives them some relief from some parts of the bill but that does ramp up pretty significantly. And over time, you are still left with companies of a pretty small size having to implement a bunch of these things,” Erickson said.

Additionally, he said, “What we’ve seen here is just an expansion of some of the already problematic things. I would love for all of us to be in a place where we have a piece of legislation that allows businesses, especially local businesses, to operate as efficiently as possible and also starts to lay down some guardrails that could be a great bipartisan framework for the rest of the United States to follow. But we’re not there yet, unfortunately.”

Why a law not yet in effect is getting revised

The measure would change Senate Bill 205, which faced an outcry from the tech and business community when it was passed by the legislature last year and signed into law by Gov. Jared Polis. 


Some said it would hurt AI development in Colorado and that companies would go elsewhere, since no similar law exists in the U.S. Others complained about the heavier burden on small companies and tech startups, which, along with the big guys, must notify consumers when AI is used for critical decisions and provide explanations when asked.

They also took issue with the term “deployers,” which refers to any company that uses its own AI or someone else’s to make consequential decisions. Deployers could be held liable if an AI decision fell into one of eight covered categories: education, employment, financial services, health care, housing, insurance, legal services and essential government services.

The pushback caused Polis, along with Rodriguez and state Attorney General Phil Weiser, to pursue revisions. A legislative task force was created and volunteers from the business and consumer advocate community began meeting in August to find compromises. They didn’t find much, which led Rodriguez to spend weeks drafting an update. 

Some of the region’s top technology leaders were unhappy with the proposed revisions. Bryan Leach, CEO of Ibotta, the Denver tech company with a popular consumer retail app, was among 200 technology executives who reached out to Polis last year with criticism of the law. 

Bryan Leach, founder and CEO of Ibotta in Denver (Handout)

Leach said he had been heartened last summer to see Polis and Rodriguez commit to revising the law to address its “overly broad definition of AI” and “proactive disclosure requirements” that hurt young and existing tech companies, jobs and the ability to raise capital in the state. They’d said in a letter that a goal was to “align with what’s happening at the federal level” instead of creating a “state-by-state patchwork of regulation.”

That did not happen with Senate Bill 318, Leach said. 

“Since then,” Leach said in an email, “multiple other states have declined to implement bills like this, and we remain alone in our approach. Unfortunately, we learned last night that the reform bill now being proposed by Sen. Rodriguez turns its back on the public commitments the Senator made in his signed letter of June 13, 2024. In fact, the bill substantially heightens the costs and administrative burdens on small businesses. If passed, this bill will only exacerbate the damage to our reputation as a business friendly state and our ability to continue to create jobs.”

He feels the bill shouldn’t have been introduced, given that the public got only about a week to see what was in it.

“Instead, we urge the legislature to delay implementation of the original bill for one year, to allow time for a full and fair consideration of whether to implement, revise, or repeal the original AI bill,” Leach said.

Proposed changes to AI law

The bill clarifies the definition of “algorithmic discrimination” as use of AI that makes a decision violating any local, state or federal anti-discrimination law. It also adds a new term, “principal basis,” to clarify that the law only affects AI systems that “make consequential decisions without meaningful human involvement.”

And it emphasizes that technologies in which AI isn’t a substantial factor in consequential decisions are excluded. Some of those are cited in the current law, which doesn’t consider tools like spell check, generative AI systems like ChatGPT or video games capable of making such decisions.

But the revisions are complex for an already lengthy law that seems to favor the business community, which was Polis’ intent when he asked for changes last year. 

The bill strips out some of the language the industry felt was vague, like the requirement to use “reasonable care” to protect consumers from algorithmic discrimination.

It proposes to limit consumer appeals to decisions “based on incorrect personal data or unlawful information,” or to adverse decisions that are not constrained by deadlines or competition, like a job offer that is no longer available.

It narrows what “housing” and “financial or lending services” mean. Housing decisions would only refer to a person’s primary residence. Financial or lending services decisions only cover personal and household financial services, according to the bill.

Also exempt are technology startups that have raised less than $10 million from third-party investors, post revenues below $5 million and have existed for less than five years.

Nonexempt companies would have to create a risk-management policy that spells out potential discriminatory risks and how they would mitigate them, and update it annually. Companies must let consumers know in a statement how decisions are made and what personal information may be considered by the AI system. 

Trade secrets, as before, are still protected.

It will still be up to the attorney general to set rules, investigate violations and enforce civil penalties of up to $20,000 per violation starting Jan. 1, 2027. Companies and developers that find and fix inadvertent violations affecting fewer than 1,000 customers would not be liable.

Response from the community

Online, members of the Rocky Mountain AI Interest Group mostly cheered the changes that would impact small companies. But many were quick to mention that while it’s an improvement from the existing law, it still has cumbersome features and is “difficult to gauge how the bill will actually function.” 

“Overall, SB 25-318 softens the original law while preserving targeted consumer protections. It strikes a more balanced approach between innovation and accountability,” wrote Dan Murray, the organization’s founder who led a group last year to testify at the statehouse against the original AI law. “Colorado continues to lead the national conversation on how to regulate AI thoughtfully.”

Vivek Krishnamurthy, a professor at the University of Colorado Law School who sat on the legislative task force to help the sides find a compromise, said the new bill seems to address all the concerns brought up by the business side. 

It added new categories of businesses that are based on the number of employees, company revenues and how many decisions their high-risk system might make. That allows a staggered rollout of consumer disclosures and required risk assessments for companies of all sizes that use AI in some way to make critical decisions about consumers. 

It removed the mention of “reasonable care” that required developers to avoid “reasonably foreseeable risks of discrimination,” he said. And unless they are asked, companies no longer must notify the attorney general if they find known risks.

“I see nothing in my initial read that makes this a win for people who want more algorithmic accountability,” Krishnamurthy said. “If you believe there should be no regulation of AI whatsoever, I’m glad to see that if this passes, there will still be a law that governs the use of high-risk artificial intelligence systems in Colorado.” 

An explanation of the bill suggests Rodriguez essentially kept to his original premise: the law is meant to add guardrails so companies aren’t overly reliant on a technology system that could unfairly decide a person’s fate.

“This is a quintessential compromise bill,” said Grace Gedye, a policy analyst at Consumer Reports, which advocated for more consumer protections in the bill. “I suspect no one feels like they are getting everything they want here.”

The bill has about one week to get through the legislature, whose current session ends May 7. Its first committee hearing had not been scheduled as of Tuesday.

Colorado Sun reporter Jesse Paul contributed to this story.


Tamara Chuang writes about Colorado business and the local economy for The Colorado Sun, which she cofounded in 2018 with a mission to make sure quality local journalism is a sustainable business. Her focus on the economy during the pandemic...