AI Risk Management: Tackling the “Shadow AI” Problem

by John Jenkins

August 28, 2025

A recent Jones Walker blog summarizing the results of IBM’s latest Cost of a Data Breach Report commented on the report’s findings concerning the growing problem of “shadow AI”:

The report documents “shadow AI” — AI tools that employees use without organizational knowledge or approval. One in five organizations reported a breach due to security incidents involving shadow AI, with breaches involving high levels of shadow AI adding $670,000 to the average breach cost compared to those with low or no shadow AI. This higher cost results from longer detection and containment times for these incidents, which took a week longer than the global average.

Shadow AI represents more than a security risk; it reveals how AI adoption can outpace organizational awareness and control. Breaches involving shadow AI were more likely to result in compromise of personally identifiable information (65%) and intellectual property (40%). This issue has displaced the security skills shortage as one of the top three most costly breach factors.

This CSO Online article acknowledges that shadow AI use is surging and that efforts to ban unauthorized AI tools have proven futile. The article argues that the key to overcoming the shadow AI problem is effective organizational adoption of AI tools. This excerpt discusses the first essential step in that process, user adoption of authorized tools:

The first phase is user adoption. It is also where the most critical missteps happen.

To succeed in this phase, leadership must offer employees access to AI in a way that is secure, supported, and aligned with policy. The goal is not training; it is personal utility. Can the tool summarize a document, draft an email, or extract key information effectively? If it can, users will adopt it organically. If it requires training, installation, or configuration, they will not. If no sanctioned tool is available, they will find their own. This is the foundational phase. Without broad, voluntary use of approved AI at the individual level, no enterprise AI strategy will gain traction.

The article says that many companies fail in this first phase because they don’t do a good job of platform selection. They either fail to make a tool available at all or select one that doesn’t meet the organization’s needs. The article contends that the best approach is to select a general-purpose AI assistant designed for enterprise use. That tool needs to be simple to set up and access and must provide immediate value across a range of business roles. It also must satisfy the business’s requirements for security, identity management, policy enforcement, and transparency.