AI Call-Monitoring Poses More Legal Risks
June 26, 2025
Previously, I wrote about Gladstone v. Amazon Web Servs. and Turner v. Nuance Commc’ns, two pending cases challenging the ability of third parties to use AI to evaluate consumer calls. Now another California case joins the list: Galanter v. Cresta Intelligence, a class action filed earlier this month. The plaintiff in Galanter alleges that the company recorded her call to a customer service line and fed that recording into AI training data without her consent, in violation of California’s Invasion of Privacy Act. With this type of privacy case on the rise, Fisher Phillips offers insights into how to understand and avoid the legal risks. One key takeaway is that your vendors can expose you to liability:
“Third-party liability: Even if your business isn’t doing the recording, plaintiffs’ counsel are asking courts to treat your AI vendor as an independent ‘eavesdropper’ if it has the capability to use the data for its own commercial purposes regardless of whether it actually does. You might get dragged into the litigation under the theory that you aided and abetted this eavesdropper by implementing the AI technology.”
The memo recommends that companies audit their vendor relationships and amend contracts to tighten data-use provisions. Defendants in these cases often rely on boilerplate language in their call scripts stating that the call may be recorded. But that language may be insufficient to disclose that the recordings are shared with third parties and used in AI training. Given the massive volume of calls handled by call centers and customer support operations, damages could be substantial if the plaintiffs succeed. These cases are still being litigated, and the legal theories behind them remain unproven. Even so, the potential exposure warrants a second look at what data you’re sharing with vendors and what they’re allowed to do with it.