Risk Management: 2026 CISO AI Risk Report

by John Jenkins

February 2, 2026

Cybersecurity Insiders recently published its 2026 CISO AI Risk Report, which is based upon the results of a survey of 200 CISOs and other security leaders. This excerpt sets forth the report’s key findings:

– 71% of CISOs say AI has access to core business systems, but only 16% govern that access effectively. These agents have access without governance.

– 92% of organizations lack full visibility into AI identities, and 95% doubt they could detect misuse if it happened. AI is already acting in environments most security teams can’t monitor.

– 86% don’t enforce access policies for AI identities. Only 17% govern even half of their AI identities like human users, and just 5% feel confident they could contain a compromised agent.

– 75% have discovered unsanctioned AI tools currently running in their environments, often with embedded credentials or elevated system access that no one is monitoring.

– Only 25% use AI-specific identity or monitoring controls. Most organizations are trying to manage machine-speed risk with fragmented tools designed for manual workflows and human users, not autonomous agents.

The report goes on to provide some of the details behind these findings. For example, when it comes to AI access, 83% of respondents expressed concern about AI access to core systems, with nearly half indicating that they have already observed AI agents exhibiting unintended or unauthorized behavior. Even more ominously, the report says that "a third of organizations dealt with an actual security incident or near-miss in the past year."