Who Watches the AI Watchmen?

by Zachary Barlow

May 27, 2025

Managing AI risks often turns out to be a very human process. To guard against errors in AI outputs and ensure that systems function as intended, many companies make human oversight a core part of their risk management practices. A recent Debevoise & Plimpton memo discusses various methods used to oversee and mitigate AI risks. When it comes to human review in particular, Debevoise recommends reviewing your reviewers:

“Implement compliance tools to help ensure that any required human review is both actually happening and effective. For example, companies can require that the human reviewer take active steps to affirmatively acknowledge that relevant citations have been checked (e.g., implementing checkboxes for each cite that, only once complete, enable the user to copy/paste the output). Companies can also record the length of time between checks to facilitate compliance reviews (e.g., automated flags are raised when a user completes checkboxes at a rate that is not consistent with meaningful human review of the referenced data). Some businesses are also routinely taking a small sample of human-approved AI decisions and having them reviewed again by an independent reviewer to assess error rates.”
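Two of the mechanisms in that passage, timing-based flags and sampled re-review, are simple enough to sketch in code. The snippet below is a minimal illustration, not anything from the memo: the threshold, sample rate, and function names are all hypothetical, and a real compliance tool would tune these against actual reviewer behavior.

```python
import random

# Hypothetical thresholds -- illustrative assumptions, not from the memo.
MIN_SECONDS_PER_CITATION = 5.0  # assumed floor for meaningful review of one cite
SAMPLE_RATE = 0.05              # assumed fraction of approved decisions re-reviewed


def flag_implausible_review(checkbox_times):
    """Raise a flag if checkboxes were ticked faster than a human could
    plausibly have verified each citation.

    checkbox_times: UNIX timestamps, one per checked citation, in the
    order the boxes were completed.
    """
    gaps = [later - earlier
            for earlier, later in zip(checkbox_times, checkbox_times[1:])]
    return any(gap < MIN_SECONDS_PER_CITATION for gap in gaps)


def sample_for_rereview(approved_decisions, rate=SAMPLE_RATE, seed=None):
    """Draw a small random sample of human-approved AI decisions for an
    independent second review, to estimate the reviewers' error rate."""
    rng = random.Random(seed)
    if not approved_decisions:
        return []
    k = max(1, round(len(approved_decisions) * rate))
    return rng.sample(approved_decisions, k)
```

A session where every box is ticked a second apart would be flagged for follow-up, while a reviewer pausing ten seconds or more per citation would not; the sampling function then feeds a steady trickle of already-approved decisions to an independent second reviewer.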

Your AI tools are only as good as the people reviewing them, and that human oversight is only as good as the risk management policies and practices behind it. Using humans to mitigate AI risks provides accountability and establishes clear responsibility when something goes wrong.