Algorithmic Disgorgement: Understanding AI’s “Death Penalty”

by John Jenkins

February 10, 2025

A recent Risk Management Magazine article notes that companies that train AI tools on consumer data collected without proper consent can face significant penalties from regulators. These penalties include “algorithmic disgorgement,” in which companies are compelled to delete specific datasets used to train AI models and, potentially, to delete the algorithm underpinning a key service. The article points out that this risk isn’t a hypothetical one:

Several cases have already demonstrated the very real business implications of algorithmic disgorgement. In a notable 2019 case, the U.S. Federal Trade Commission (FTC) forced U.K.-based political consulting firm Cambridge Analytica to delete all of its algorithms and models developed using the data of millions of Facebook users it had harvested without their knowledge or consent.

In fact, the FTC has been quite active in this arena in recent years. In 2021, after finding that Everalbum had used user-uploaded photos to build facial recognition technologies, the FTC ordered the firm to delete its models and the underlying data because it had collected the images without proper consent. The following year, the FTC ordered WW International (formerly Weight Watchers) to delete all algorithms trained on data collected through Kurbo, its weight-loss app targeted toward children, because it had obtained the information without proper parental consent.

Further complicating matters, disgorgement orders have typically carried short compliance deadlines—sometimes 90 days or less—making operational disruption a very real prospect.

The article says that while sectors like health care, financial services, and law enforcement face the greatest risk of algorithmic disgorgement due to the sensitive nature of the data they hold and the harm they can cause, other industries that rely heavily on consumer data to train their AI models are also under regulatory scrutiny. It goes on to recommend how companies can manage the risks of unconsented data and the algorithmic disgorgement remedy.