AI Risk Management: Prepare for New State Regulation of High-Risk Uses

by John Jenkins

January 22, 2026

Wilson Sonsini recently published a list of AI regulatory developments that companies need to watch during the upcoming year. As this excerpt explains, one of those developments is an increase in state efforts to regulate high-risk uses of AI tools:

For companies developing or deploying AI for consequential decisions (e.g., financial or lending services, education and employment opportunities, healthcare, housing, essential government services, or legal services), be prepared for new U.S. state laws regulating high-risk AI use. For example, under new regulations issued under the California Consumer Privacy Act, businesses that use “automated decision-making technology” or ADMT to make “significant decisions” about consumers must provide consumers with a pre-use notice, the ability to opt out of the use of ADMT, and access to information about the business’s ADMT use. Businesses must comply with these requirements beginning January 1, 2027.

In addition to California, the Colorado AI Act, currently slated to come into effect on June 30, 2026, will place substantial new responsibilities on AI developers and deployers, including requirements to use reasonable care to avoid algorithmic discrimination, develop a risk management policy and program, provide notices, and conduct impact assessments. According to media reports, the law is expected to be the subject of debate during this year’s legislative session and may change before it goes into effect in 2026.

How will President Trump’s recent Executive Order affect all of this state activity? Wilson Sonsini says that’s another issue that companies will need to keep an eye on this year:

Expect the Trump Administration and U.S. states to clash on AI regulations. As discussed in our previous client alert, the Trump Administration issued an Executive Order (EO) on December 11, 2025, that seeks to establish “a minimally burdensome national standard” on AI (AI EO). Among other directives, the AI EO instructs the U.S. Department of Justice to sue states over unconstitutional AI regulations and the U.S. Secretary of Commerce to publish an evaluation of “burdensome” state AI laws within 90 days for referral to the Administration.

According to public statements by Trump Administration officials, including White House Special Advisor for AI and Crypto David Sacks, who is assigned a critical role in the AI EO’s implementation, laws in California, New York, Colorado, and Illinois are in the crosshairs. However, because the AI EO cannot automatically void state laws or render them unenforceable, AI laws in these states remain in full force unless and until they are amended, repealed, or struck down through appropriate legal or administrative processes.