Avoiding The Risks of “AI Workslop”

by Zachary Barlow

November 5, 2025

The latest addition to the AI vernacular is “workslop,” a term for AI-generated content that is facially passable as work product but deeply lacking in substance and accuracy. It’s a problem that is eroding AI efficiency gains, as co-workers and management must expend time and effort correcting workslop, often erasing the time savings from the initial AI use. But workslop is more than a time-wasting inefficiency; it is also a real risk. A recent Debevoise & Plimpton memo discusses workslop in the context of junior employees using AI to punch above their weight class, generating content that they themselves lack the expertise to fact-check. That content can create serious problems for an organization if it slips through the cracks:

“If, however, the senior employee does not identify the workslop, a significant risk exists that the senior employee will rely upon work product that facially looks accurate and even sophisticated, but is actually wrong or incomplete. If the senior employee relies upon the workslop to make an important decision internally or shares it with a client, a firm could face reputational damage, as well as possible harm and legal liability.”

The memo also offers eight tips for avoiding workslop and mitigating the associated risks through workplace policies and practices. One interesting suggestion is to limit cutting and pasting from AI outputs, so that employees cannot pass off wholesale AI generations as their own work product:

“Prohibit or Discourage AI Cutting and Pasting: Another option is to allow the use of AI for background research, but either prohibit or strongly discourage their employees from cutting and pasting AI into any document that is used for work. Rather, they are required to draft their work product from scratch, only using the AI to generate ideas or to lead them to trusted sources.”
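A policy like this is easier to state than to enforce. One conceivable technical backstop, offered here purely as a hypothetical sketch rather than anything the memo proposes, would be to compare submitted drafts against logged AI transcripts and flag near-verbatim overlap for human review. The sketch below assumes such transcripts are available; the function names and the threshold are illustrative:

```python
# Hypothetical sketch: flag near-verbatim reuse of AI output in a draft.
# The memo describes a policy, not a tool; this only illustrates one way
# an organization might surface likely cut-and-paste for human review.

def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Return the set of lowercased word n-grams in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(ai_output: str, draft: str, n: int = 5) -> float:
    """Fraction of the draft's n-grams that also appear in the AI output."""
    draft_grams = ngrams(draft, n)
    if not draft_grams:
        return 0.0
    return len(draft_grams & ngrams(ai_output, n)) / len(draft_grams)

SIMILARITY_THRESHOLD = 0.30  # illustrative; tune to the organization's tolerance

if __name__ == "__main__":
    ai_transcript = "The limitations period for such claims is generally six years under the statute."
    draft = "As we noted, the limitations period for such claims is generally six years under the statute."
    ratio = overlap_ratio(ai_transcript, draft)
    if ratio > SIMILARITY_THRESHOLD:
        print(f"Flag for review: {ratio:.0%} of the draft overlaps AI output.")
```

A check like this would not catch paraphrased workslop, which is part of why the memo leans on drafting-from-scratch requirements rather than detection alone.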

Companies should review their AI use policies and ensure that provisions are in place to combat workslop. The memo notes that while most AI policies assign ownership of outputs to the user, along with the ultimate responsibility for any shortcomings, this may not be enough to stop workslop. Instead, companies may need to dig deeper, getting to the roots of how and why workslop is being generated and putting safeguards in place to prevent it.