What Drives "Shadow AI"?

by Zachary Barlow

November 17, 2025

Last week, John wrote about "shadow AI." Shadow AI is the use of AI within an organization in ways that do not align with that organization's AI policies. This can mean using unauthorized AI tools or using vetted tools in an unauthorized fashion. Shadow AI is on the rise, and as John noted, C-suite executives are among the worst offenders. A recent CIO Dive article offers some insight into why those at the top are violating their own AI policies:

“More than two-thirds of C-suite executives admitted using unapproved AI tools at work in the past three months. Of those, more than one-third used unapproved tools at least five times during the last quarter. More than half rated security and compliance as ‘challenging’ or ‘extremely challenging’ when implementing AI.”

Cybersecurity and compliance can be difficult for non-technical personnel to understand, and shadow AI use is often seen as a minor transgression with no real consequences. Employees and executives alike may view AI policies as overly restrictive, and shadow AI as a way of "cutting through the red tape" for efficiency's sake. Keeping cybersecurity compliance simple and training personnel on AI security risks can help counter these attitudes.

Ultimately, people resort to shadow AI because their organization's internal AI offerings do not meet their needs. This is why those in charge of AI tools and policies need to seek feedback from end users. When members of an organization have to step outside the AI policy to get what they need, the organization is exposed to unnecessary risk. Organizations can reduce shadow AI use, and better manage their cybersecurity risk, by meeting AI needs internally, whether by offering more AI tools or by improving existing ones.