Shadow AI: Why Is the Use of Unauthorized AI Tools Surging?

by John Jenkins

November 12, 2025

We’ve previously blogged about the problems associated with the surge in employee use of “Shadow AI”. This excerpt from a recent CIO.com article goes further, discussing some of the reasons why Shadow AI usage is growing so fast:

The rapid rise of shadow AI reflects not rebellion but accessibility. A decade ago, deploying new technology required procurement, infrastructure and IT sponsorship. Today, all that’s needed is a browser tab and an API key. With open-source models like Llama 3 and Mistral 7B running locally and commercially available LLMs on demand, anyone can build an automated process in just minutes. The result is a silent acceleration of experimentation happening well outside formal oversight.

Three dynamics drive this growth. First, democratization. Generative AI’s low entry barrier has turned every employee into a potential developer or data scientist. Second, organizational pressure. Business units are under visible mandates to use AI to enhance productivity, often without a parallel mandate for governance. Third, cultural reinforcement. Modern enterprises prize initiative and speed, sometimes valuing experimentation more than adherence to process. Gartner’s Top Strategic Predictions for 2024 and Beyond warns that unchecked AI experimentation is emerging as a critical enterprise risk that CIOs must address through structured governance and control.

This Cybernews article adds one more dynamic to the pile – it appears that managers are inclined to look the other way when their employees use Shadow AI:

59% of employees admit to using AI tools that haven’t been approved by their employers. More interestingly, out of those using unapproved tools, 57% claim that their direct managers are OK with it and support it, and 16% claim their direct manager doesn’t care.

Why don’t those managers care about the use of unapproved tools? This excerpt suggests that it just may be because the company’s Big Kahunas are the worst offenders:

93% of executives and senior managers admitted to using unapproved AI tools at work. Managers, team leaders, and supervisors also used unapproved AI tools surprisingly often. This creates an interesting paradox – those who are supposed to set an example and prioritize company security seem to be the most irresponsible when it comes to AI use at work.