Top AI Risks on the General Counsel’s Plate
February 24, 2025
A recent Lowenstein memo provides an overview of the major AI-related legal, compliance, and cybersecurity risks that should be on the GC’s radar screen. These include, among other things, the need to identify and understand where AI is being used, establish appropriate limits on its use by employees, address data quality, rights, and confidentiality concerns, and account for the implications of AI’s growth for cyber risk management. As this excerpt discusses, the rapidly evolving regulatory landscape and the resulting patchwork of domestic and international laws governing AI is one of the more significant risks that a GC must address:
In the U.S., regulation of AI at the federal level has been limited. Several agencies, including the CFPB, FTC, and SEC, have issued rules and guidance regarding the use of AI or of technologies that incorporate AI, focusing generally on AI adoption that is transparent and conspicuous. The National Institute of Standards and Technology has also issued guidance. Far more regulatory progress has been made at the state level, where several states have enacted AI-related legislation and many more bills have been proposed.
The proposed and enacted bills vary widely in scope and obligations. Utah’s Artificial Intelligence Policy Act, for example, requires disclosure when using AI tools with customers. California recently enacted two AI laws that will take effect in January 2026 and require developers to be transparent about AI training data and offer AI detection watermarking tools. And the new Colorado AI law, which becomes effective in February 2026, requires developers and deployers of “high-risk artificial intelligence systems” to protect consumers from risks of algorithmic discrimination.
Internationally, countries are approaching AI governance through a mix of voluntary guidelines and standards, use-specific or comprehensive legislation, and national AI strategies. To mention just a few of these developments: In Europe, the European Union’s (EU) Artificial Intelligence Act entered into force in August 2024. It has extraterritorial scope, applying to AI systems placed on the EU market or used in the EU by or on behalf of companies located anywhere in the world.
China has adopted multiple laws focused on the use (as opposed to the development and deployment) of AI. Canada’s proposed Artificial Intelligence and Data Act aims to protect Canadians from high-risk systems and to ensure the responsible development of AI. Singapore, by contrast, is taking a sectoral approach, leaving the relevant authorities to publish their own regulations and guidelines.
The memo says that although these regulatory efforts share some common patterns, there is no standard approach to AI regulation, and the legal landscape will continue to evolve as AI technology advances. That means businesses, and their GCs, will need to stay informed about new developments and be prepared to adapt to new rules.