AI Regulation: California Becomes First in Nation to Regulate Developers of Large AI Models

by John Jenkins

October 8, 2025

On September 29, Gov. Gavin Newsom signed California’s Transparency in Frontier Artificial Intelligence Act into law. The Act is the first statute adopted in the United States to provide a regulatory framework governing the development and use of large AI models, and it imposes a variety of obligations on entities developing high-capacity AI models.

The Act applies to “frontier developers,” defined as entities that initiate or conduct the training of a frontier model. In turn, the term “frontier model” is defined in a way that limits the statute’s application to large and complex AI models. This excerpt from a recent Robinson + Cole blog provides an overview of its requirements:

– Frontier AI Framework – Large frontier developers must publish and maintain a documented framework outlining how they assess and mitigate catastrophic risks associated with their models. The framework may include risk thresholds and mitigation strategies, cybersecurity practices, and internal governance and third-party evaluations. A catastrophic risk is defined as a foreseeable and material risk that a frontier model could contribute to the death or serious injury of at least 50 people, or cause over $1 billion in property damage, through misuse or malfunction.

– Transparency Reporting – Prior to deploying a new or substantially modified frontier model, developers must publish a report on their websites detailing the model’s capabilities and intended uses, risk assessments and mitigation strategies, and the involvement of third-party evaluators. For example, a developer releasing a model capable of generating executable code or scientific analysis must disclose its intended use cases and any safeguards against misuse.

– Incident and Risk Reporting – Critical safety incidents must be reported to the Office of Emergency Services (OES) within 15 days. If imminent harm is identified, an appropriate authority, such as a law enforcement agency or public safety agency, must be notified within 24 hours. For instance, if a model autonomously initiates a cyberattack, the developer must notify an appropriate authority within 24 hours. Developers are also encouraged, but not required, to report critical safety incidents pertaining to foundation models that are not frontier models.

The Act also incorporates whistleblower protections that require developers to, among other things, notify employees of their rights as whistleblowers and establish anonymous reporting mechanisms. It imposes civil penalties of up to $1 million per violation, enforceable by California’s Attorney General.

A recent Forbes article on the statute characterizes it as “sending shock waves through the industry,” and this excerpt from that article summarizes its potential impact:

California is home to Silicon Valley, today’s nexus of artificial intelligence innovation. What happens in Sacramento rarely stays there. This AI law could establish a new baseline for AI standards across the United States.