Podcast Episode

Seventy-One Percent of AI Agents Access Enterprise Systems Without Governance

January 25, 2026


A new survey of over two hundred CISOs reveals that while most organisations have AI tools accessing core business systems, only sixteen percent effectively govern that access. The findings expose a critical security blind spot as AI agents proliferate faster than the controls meant to manage them.

The Governance Gap Widens

A stark disconnect has emerged between the rapid deployment of AI agents and the security frameworks designed to govern them. The 2026 CISO AI Risk Report, published by Cybersecurity Insiders and Saviynt based on responses from two hundred and thirty-five chief information security officers at large enterprises, reveals that seventy-one percent of organisations now have AI tools accessing core business systems like Salesforce and SAP. Yet only sixteen percent have implemented effective governance over that access.

Visibility Crisis

The survey exposes troubling gaps in organisational awareness. Ninety-two percent of organisations lack full visibility into their AI identities, while ninety-five percent doubt they could detect or contain misuse if it occurred. Three-quarters of respondents reported discovering unsanctioned AI tools already operating within their environments, often embedded with credentials or elevated system access that no one is monitoring.

Shadow AI Spreads Unchecked

The proliferation of unauthorised shadow AI presents a particular challenge. Eighty-six percent of security leaders do not enforce access policies for AI identities, and only seventeen percent govern even half of their AI identities with the same rigour applied to human users. A mere five percent felt prepared to contain a compromised AI agent.

Nearly half of surveyed CISOs have already observed AI agents exhibiting unintended or unauthorised behaviour, while a third have dealt with an actual security incident or near miss in the past year. Separate research from Netskope found that forty-seven percent of generative AI users still operate through personal accounts rather than organisation-managed tools.

Legacy Systems Cannot Keep Pace

Traditional identity and access management tools designed for human users are proving inadequate for autonomous AI systems operating at machine speed. The survey found sixty percent of organisations still use login-based authentication patterns for AI identities, approaches ill-suited for systems requiring API-first controls such as token lifecycle management and scope-limited authorisation.
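To make the contrast concrete, here is a minimal sketch of what scope-limited, short-lived credentials for an AI agent identity might look like, as opposed to a persistent login. This is an illustration only: the signing key, agent name, and scope strings are assumptions for the example, not anything described in the report, and a real deployment would use an established token standard rather than hand-rolled signing.

```python
import base64
import hashlib
import hmac
import json
import time

# Assumption: a shared signing key, purely for illustration.
SECRET = b"demo-signing-key"

def issue_token(agent_id: str, scopes: list[str], ttl_seconds: int = 300) -> str:
    """Issue a short-lived token that names the agent and its granted scopes."""
    payload = json.dumps({
        "sub": agent_id,
        "scopes": scopes,
        "exp": time.time() + ttl_seconds,  # expiry enforces token lifecycle
    }).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def authorise(token: str, required_scope: str) -> bool:
    """Reject tampered or expired tokens and any request outside granted scopes."""
    body, _, sig = token.partition(".")
    payload = base64.urlsafe_b64decode(body)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(payload)
    if time.time() > claims["exp"]:
        return False
    return required_scope in claims["scopes"]

token = issue_token("crm-summariser-agent", ["salesforce:read"])
print(authorise(token, "salesforce:read"))   # True: within granted scope
print(authorise(token, "salesforce:write"))  # False: scope was never granted
```

The point of the pattern is that the agent's authority expires on its own and is bounded per request, so a leaked credential is useful only briefly and only for the narrow scope it names.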

Only twenty-five percent of organisations currently use AI-specific monitoring or controls. Security experts warn that AI agents present a distinct challenge because they can act with delegated authority, chain actions across systems, and quietly accumulate permissions as integrations expand.
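One basic AI-specific control of the kind the survey describes is a periodic check for permission accumulation: comparing each agent identity's current entitlements against the baseline approved when it was onboarded. The sketch below assumes a hypothetical inventory and baseline mapping; the agent names and scope strings are invented for illustration.

```python
# Hypothetical inventory: each AI identity mapped to its current entitlements.
inventory = {
    "crm-summariser-agent": {"salesforce:read", "salesforce:write"},
    "invoice-agent": {"sap:read"},
}

# Baselines recorded at onboarding (assumed data for illustration).
baselines = {
    "crm-summariser-agent": {"salesforce:read"},
    "invoice-agent": {"sap:read"},
}

def find_drift(inventory: dict[str, set[str]],
               baselines: dict[str, set[str]]) -> dict[str, set[str]]:
    """Flag agents holding entitlements beyond their approved baseline."""
    return {
        agent: scopes - baselines.get(agent, set())
        for agent, scopes in inventory.items()
        if scopes - baselines.get(agent, set())
    }

print(find_drift(inventory, baselines))
# {'crm-summariser-agent': {'salesforce:write'}}
```

An agent absent from the baseline map is treated as having no approved entitlements at all, so unsanctioned shadow-AI identities surface in the same report as over-privileged sanctioned ones.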

Identity Becomes the Enforcement Layer

CISOs are responding by prioritising identity as the critical security control point. According to the survey, seventy-three percent would invest in API and workload identity discovery if budget allowed, while sixty-eight percent would focus on continuous monitoring and posture analytics. With the EU AI Act's high-risk system requirements taking full effect in 2026 and regulatory scrutiny increasing globally, the window for organisations to address AI governance gaps is narrowing rapidly.

Published January 25, 2026 at 9:32pm