AI Risk Management: Best Practices for “Humans in the Loop”
June 18, 2025
Last month, Zach blogged about a Debevoise article on the role of human oversight in AI risk management – a.k.a. having a “human in the loop.” One insight that made the article worth revisiting is its advice that in some cases it’s best to have a human “over the loop,” while in others it’s best to keep a human “out of the loop”:
Human-Over-the-Loop: Some AI outputs do not require human review of each associated decision or use – review of every decision being what is usually meant by “human-in-the-loop.” Often it is sufficient to allow the AI to make decisions on its own while providing for human monitoring and spot-checking of those decisions to ensure that the AI is behaving as expected, an arrangement sometimes referred to as “human-over-the-loop.” An example of this kind of decision is whether a credit card purchase was fraudulent and the card should be disabled to avoid further fraud. This decision is usually made by the AI, and a human can quickly override it after confirming that the AI’s decision was a false positive (see the first sketch below).
Human-Out-of-the-Loop: In some circumstances the AI should prevail over human decisions, which is referred to as “human-out-of-the-loop.” This usually arises where quick decisions are needed to prevent significant harm, and where machine decision-making is viewed as superior or there is a strong possibility that human decision-making is impaired. Examples include an AI-based cybersecurity detection tool that prevents a human from emailing out an attachment containing malware, or factory machinery that shuts down if the AI monitoring system detects that its human operator is falling asleep or otherwise impaired (see the second sketch below).
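To make the contrast concrete, here is a minimal Python sketch of the “human-over-the-loop” fraud-screening pattern described above. Everything in it is a hypothetical illustration, not any real vendor’s API: the Transaction and Decision types, the FraudModel stand-in, the spot_check_rate parameter, and the $5,000 threshold are all assumptions made up for this example.

```python
import random
from dataclasses import dataclass

@dataclass
class Transaction:
    txn_id: str
    amount: float

@dataclass
class Decision:
    txn: Transaction
    card_disabled: bool
    overridden: bool = False

class FraudModel:
    """Stand-in for a trained fraud classifier (hypothetical)."""
    def predict_fraud(self, txn: Transaction) -> bool:
        # Placeholder heuristic; a real system would score with a model.
        return txn.amount > 5000

class HumanOverTheLoop:
    """The AI acts on every transaction immediately; humans monitor a
    sample of its decisions after the fact and can override mistakes."""

    def __init__(self, model: FraudModel, spot_check_rate: float = 0.05):
        self.model = model
        self.spot_check_rate = spot_check_rate
        self.review_queue: list[Decision] = []

    def process(self, txn: Transaction) -> Decision:
        # No human gate: the card is disabled (or not) as soon as the model decides.
        decision = Decision(txn, card_disabled=self.model.predict_fraud(txn))
        # Flagged decisions, plus a random sample, go to a human review queue.
        if decision.card_disabled or random.random() < self.spot_check_rate:
            self.review_queue.append(decision)
        return decision

    def override(self, decision: Decision) -> None:
        # A reviewer who confirms a false positive re-enables the card.
        decision.card_disabled = False
        decision.overridden = True
```

The key design choice is that the human correction happens after the AI’s decision has already taken effect: the reviewer audits and repairs the loop rather than gating it.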
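And here is a companion sketch, under the same caveats, of the “human-out-of-the-loop” email example: the AttachmentScanner class, the “EICAR” check, and the gateway’s audit log are invented placeholders standing in for a real AI-based detection tool.

```python
class AttachmentScanner:
    """Stand-in for an AI-based malware detector (hypothetical)."""
    def is_malicious(self, attachment: bytes) -> bool:
        # Placeholder check; a real system would score with a trained model.
        return b"EICAR" in attachment

class OutboundEmailGateway:
    """Human-out-of-the-loop: a flagged attachment is blocked immediately,
    with no real-time human override; blocks are logged for later review."""

    def __init__(self, scanner: AttachmentScanner):
        self.scanner = scanner
        self.audit_log: list[str] = []

    def send(self, recipient: str, attachment: bytes) -> bool:
        if self.scanner.is_malicious(attachment):
            # The AI prevails over the human sender: the send fails outright.
            self.audit_log.append(f"blocked outbound attachment to {recipient}")
            return False
        return True  # clean attachment, message goes out
```

Here the human has no real-time veto; accountability instead comes from the audit log and from the kind of periodic reassessment of oversight procedures discussed below.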
The blog also notes that procedures for human oversight of AI content and decision-making should be reassessed periodically to ensure they remain efficient and effective as circumstances change.