AI Risk Management: The Myth of the “AI Risk Owner”

by John Jenkins

March 17, 2026

This recent Crowe LLP blog says that companies trying to tag a single “AI risk owner” are on a fool’s errand. Instead, an effective approach to AI governance needs to recognize that AI systems are built and used cross-functionally, and that their risks should be managed cross-functionally as well. In this environment, “shared accountability” needs to be more than a slogan, and this excerpt from the blog offers some specific suggestions on the tactics companies can use to achieve it in practice:

– Build an AI governance RACI [Responsible, Accountable, Consulted, Informed] matrix. Clearly assign who is responsible, accountable, consulted, and informed for each category of AI risk, such as fairness, explainability, robustness, privacy, and security. If a risk crosses a team boundary, assign the handoff.

– Use ISO/IEC 42001 to guide role assignments, including model governance and operational accountability. Make compliance provable with documented approvals, testing evidence, and monitoring records.

– Assign AI stewards in business units to own day-to-day accountability, not just escalation paths. Make stewardship real: approve use cases, enforce guardrails, and own outcomes.

– Form a cross-functional AI risk working group with quarterly risk reviews, clear authority to intervene when controls fail, and incident response plans. Define intervention up front and enforce it. Pause deployment, restrict use cases, require remediation, or pull a system from production. Set triggers for intervention (for example, drift thresholds, material errors, and privacy incidents) and name who can act.

– Implement release gates for AI. No gate, no launch.
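To make the RACI-plus-release-gate idea above concrete, here is a minimal sketch in Python. All role names, risk categories, and the `release_gate` function are illustrative assumptions, not drawn from the Crowe blog or ISO/IEC 42001; a real implementation would live in a governance workflow tool with documented approvals.

```python
# Illustrative sketch only: role names and risk categories are hypothetical.

# A minimal RACI matrix: one entry per AI risk category, naming who is
# Responsible, Accountable, Consulted, and Informed for that risk.
RACI = {
    "fairness":   {"R": "Data Science", "A": "Chief Risk Officer",  "C": "Legal",        "I": "Audit"},
    "privacy":    {"R": "Engineering",  "A": "Privacy Officer",     "C": "Legal",        "I": "Audit"},
    "robustness": {"R": "ML Ops",       "A": "Head of Engineering", "C": "Data Science", "I": "Risk"},
}

def release_gate(signoffs: dict) -> bool:
    """Return True only if every RACI risk category has a documented
    sign-off from its accountable party: no gate, no launch."""
    return all(signoffs.get(category, False) for category in RACI)

# Example: a single missing sign-off blocks the launch.
signoffs = {"fairness": True, "privacy": True, "robustness": False}
print(release_gate(signoffs))  # False: robustness sign-off is missing
```

The same structure extends naturally to the working group's intervention triggers: a monitoring job could flip a category's sign-off back to `False` when, say, a drift threshold is breached, automatically re-closing the gate until remediation is approved.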

If no one is empowered to say “stop,” governance is theater. At the same time, if everyone can say “stop,” governance becomes the ultimate blocker. The point is to decide, in advance, who can stop what and under what conditions.