AI Risk Management: Does Your Program Need A New Approach?
March 30, 2026
A recent Boston Consulting Group publication says that programs for managing AI risks need to be revamped. Most of these programs assumed a small number of AI deployments that were “managed centrally, released carefully, and governed by a standard process.” Yeah, those were the days, huh? Agentic AI, vibe coding, and the all-around breakneck pace of AI use cases and deployments have shattered that paradigm. Here’s what BCG says should replace this traditional approach:
The resolution to this dilemma is a new approach: different AI uses have different risk profiles and require different levels of review and mitigation. The insights that teams have gained from prior use cases are not lost. They are codified in a knowledge base of risks and successful guardrails and mitigations.
This knowledge base has two primary benefits:
- New teams can turn to this playbook to learn how identical or similar risks were successfully managed in the past.
- It can serve as a central control point to build an inventory of AI uses. This critical inventory will allow companies to understand their AI exposure and supply chain risks if new vulnerabilities are discovered or regulatory actions are imposed.
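The knowledge base BCG describes could take many forms; a minimal sketch might look like the following. Every name here (`RiskKnowledgeBase`, `proven_guardrails`, the sample entries) is hypothetical, invented for illustration, not anything from the BCG publication: the point is only that one structure can serve both purposes above, letting a new team look up guardrails that worked for a similar risk while also acting as a central inventory of AI uses.

```python
# Hypothetical sketch of a risks-and-guardrails knowledge base.
# Each entry records an AI use case, the risks it raised, and the
# guardrails/mitigations that proved successful.
from dataclasses import dataclass, field


@dataclass
class UseCaseEntry:
    name: str
    risks: set[str]
    guardrails: list[str] = field(default_factory=list)


class RiskKnowledgeBase:
    def __init__(self) -> None:
        self.entries: list[UseCaseEntry] = []

    def add(self, entry: UseCaseEntry) -> None:
        self.entries.append(entry)

    def proven_guardrails(self, risk: str) -> list[str]:
        """Benefit 1: how was this (or a similar) risk handled before?"""
        return [g for e in self.entries if risk in e.risks for g in e.guardrails]

    def inventory(self) -> list[str]:
        """Benefit 2: a central inventory of AI uses, e.g. to gauge
        exposure when a new vulnerability or regulation lands."""
        return [e.name for e in self.entries]


kb = RiskKnowledgeBase()
kb.add(UseCaseEntry(
    "support chatbot",
    {"pii_leakage", "hallucination"},
    ["output redaction", "human review of low-confidence replies"],
))
print(kb.proven_guardrails("pii_leakage"))
print(kb.inventory())
```

A new team facing a `pii_leakage` risk would find the redaction and human-review guardrails already on file, rather than reasoning from scratch.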
In this approach, most requests are handled swiftly by applying proven guardrails and mitigations. But high-stakes, novel, or unproven applications, especially those involving agents, receive extra attention and expertise.
Risk management becomes an ongoing organizational capability built around speed for most cases and thorough diligence for novel uses where the risk is material. Rather than being a pesky, check-the-box activity, risk management promotes innovation and quality. It adds value rather than friction.
BCG argues that building this kind of knowledge base will let companies move from what is currently an inefficient, ad hoc process to one that is more streamlined, structured, and better understood. It contends that, once implemented, this approach will allow the sponsoring team to answer a short series of questions about the proposed AI use, each tied to a specific potential risk. The answers guide next steps: the organization assesses the inherent risk of the AI use and automatically routes the request based on its risk level, which in turn determines the degree of oversight required for the proposed use.
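In code terms, that intake flow is just a scoring-and-routing function. The sketch below is purely illustrative, not BCG's actual questionnaire: the questions, weights, and tier thresholds are invented assumptions, but they show the shape of the mechanism, in which answers tied to specific risks produce an inherent-risk score, and the score routes the request to a review tier.

```python
# Illustrative risk-based intake routing (all names and weights are
# hypothetical, not from the BCG publication).

# Each question maps to a potential risk; a "yes" adds its weight to the score.
RISK_QUESTIONS = {
    "handles_personal_data": 3,
    "customer_facing": 2,
    "uses_autonomous_agents": 4,   # agentic uses get extra scrutiny
    "novel_use_case": 3,           # no proven guardrails on file yet
}

# (minimum score, review tier) pairs, from lightest to heaviest oversight.
REVIEW_TIERS = [
    (0, "auto-approve: apply standard guardrails"),
    (4, "fast-track: reuse proven mitigations from the knowledge base"),
    (8, "full review: novel or high-stakes, needs expert diligence"),
]


def route_request(answers: dict[str, bool]) -> str:
    """Score the sponsoring team's answers and return the review tier."""
    score = sum(w for q, w in RISK_QUESTIONS.items() if answers.get(q))
    tier = REVIEW_TIERS[0][1]
    for threshold, label in REVIEW_TIERS:
        if score >= threshold:
            tier = label
    return tier


# A low-risk internal tool sails through; a novel agentic use does not.
print(route_request({"customer_facing": True}))
print(route_request({"uses_autonomous_agents": True,
                     "novel_use_case": True,
                     "handles_personal_data": True}))
```

This matches the spirit of the approach: most requests clear the first two tiers quickly with known guardrails, while high-stakes or unproven uses are automatically escalated to deeper review.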