AI Regulation: California Enacts Safety Standards for Companion Chatbots

by John Jenkins

October 30, 2025

Companion AI chatbots are intended to act as your pal or therapist, which, given the current state of AI tools, seems to me to involve a high risk of “creepy” behavior by the bot.  California legislators apparently share those concerns, and recently enacted Senate Bill 243, which mandates specific safeguards for companion chatbots.

This Jones Walker blog summarizes the legislation and notes that it imposes obligations relating to disclosure, content and safety protocols, and accountability.  This excerpt highlights the disclosure obligations imposed by the new law:

– AI Disclosure (General Users): If a “reasonable person” would be misled to believe they are interacting with a human, operators must issue a “clear and conspicuous notification” that the companion chatbot is artificially generated and not human.

– AI Disclosure (Minors): For users the operator knows are minors, operators must disclose that the user is interacting with artificial intelligence and, during continuing interactions, provide clear and conspicuous notifications at least every three hours reminding the user to take a break and reiterating that the chatbot is AI-generated.

– Suitability Warning: Operators must disclose on the application, browser, or other access format that companion chatbots may not be suitable for some minors.

The required content and safety protocols include a crisis prevention protocol that prevents the chatbot from producing suicidal ideation content and refers users who express suicidal ideation to a crisis service provider.  This crisis prevention protocol must be published on the chatbot operator’s website.  Operators also must take reasonable steps to ensure that minors are not exposed to sexually explicit content.  Finally, the statute imposes an annual reporting requirement with which operators must comply beginning in July 2027.