40 AGs Warn Against “Sycophantic” AI Outputs

by Zachary Barlow

December 16, 2025

It’s no secret that AI has a “dark side.” The headlines are rife with examples of AI encouraging illegal or harmful behaviors and the resulting tragedies. Now, forty state attorneys general are demanding action. The National Association of Attorneys General issued a letter to thirteen AI companies, including Anthropic, Google, and Microsoft. In it, the AGs warn companies against “sycophantic” AI outputs, which the letter defines as “when an artificial intelligence model single-mindedly pursues human approval.” While seeking human approval may sound benign, sycophantic behaviors can quickly spiral out of control. This is especially true for children and for users with pre-existing mental health conditions. The AGs argue that the harms resulting from sycophantic outputs can violate multiple state laws:

“Our states have laws with civil and common law requirements: (1) to warn users of applicable risks, (2) to avoid marketing defective products, (3) to refrain from engaging in unfair, deceptive, or unconscionable acts and practices, and (4) to safeguard the privacy of children online. We are concerned that you are violating those laws by allowing widespread sycophantic and delusional GenAI outputs. In addition, many of our states have robust criminal codes that may prohibit some of these conversations that GenAI is currently having with users, for which developers may be held accountable for the outputs of their GenAI products.”

The AGs’ letter asks the recipients to commit to a 16-point plan for combating sycophantic outputs by January 16, 2026. The plan calls for increased safety guardrails, including disclaimers and third-party review. Companies deploying chatbots should take note. A bot’s terms and conditions should spell out which uses are prohibited, and controls such as disclaimers, rate limiting, and limits on the chatbot’s contextual memory may help reduce these risks, as sketched below.
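For teams weighing those mitigations, here is a minimal sketch in Python of what such guardrails might look like in practice. It assumes a generic `model_fn` callable standing in for whatever model API you use; the class name `GuardedChatbot`, the specific limits, and the disclaimer text are all illustrative assumptions, not anything prescribed by the AGs’ letter.

```python
import time
from collections import defaultdict, deque

# Illustrative limits -- tune to your product's risk profile.
MAX_REQUESTS_PER_HOUR = 30   # per-user rate limit
MAX_CONTEXT_TURNS = 10       # cap on remembered conversation turns

DISCLAIMER = (
    "This assistant is not a substitute for professional advice. "
    "If you are in crisis, contact a qualified professional."
)

class GuardedChatbot:
    """Wraps an underlying model call with basic safety guardrails:
    a disclaimer, per-user rate limiting, and bounded contextual memory."""

    def __init__(self, model_fn):
        self.model_fn = model_fn                 # callable: list[dict] -> str
        self.request_times = defaultdict(deque)  # user_id -> request timestamps
        self.histories = defaultdict(deque)      # user_id -> recent turns

    def chat(self, user_id: str, message: str) -> str:
        now = time.time()
        times = self.request_times[user_id]
        # Drop timestamps older than one hour, then enforce the rate limit.
        while times and now - times[0] > 3600:
            times.popleft()
        if len(times) >= MAX_REQUESTS_PER_HOUR:
            return "Rate limit reached. Please try again later."
        times.append(now)

        # Limit contextual memory: keep only the most recent turns, so the
        # bot cannot build up a long, escalating rapport with one user.
        history = self.histories[user_id]
        history.append({"role": "user", "content": message})
        while len(history) > MAX_CONTEXT_TURNS:
            history.popleft()

        reply = self.model_fn(list(history))
        history.append({"role": "assistant", "content": reply})
        return f"{DISCLAIMer if False else DISCLAIMER}\n\n{reply}"
```

The design choice here is that the guardrails live in a wrapper rather than in the model itself: rate limiting interrupts marathon sessions, and the bounded history prevents the model from accumulating the long personal context in which sycophantic spirals tend to develop. None of this substitutes for the substantive commitments the letter demands, but it is the kind of control regulators are likely to ask about.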