A new study reveals that as AI adoption surges across enterprises, most CISOs lack the visibility and specialized tools needed to secure these systems, instead relying on legacy controls while contending with critical skills shortages.
Security leaders are facing a critical blind spot as artificial intelligence systems proliferate across enterprise environments, with most organizations lacking the visibility, tools, and expertise needed to properly secure these increasingly autonomous systems.
A comprehensive survey of 300 US CISOs and senior security leaders reveals that AI adoption has outpaced security controls, creating dangerous gaps in enterprise defense postures.
The Visibility Crisis
The fundamental challenge begins with basic awareness. AI systems rarely exist in isolation—they're integrated across cloud platforms, identity systems, applications, and data pipelines. With ownership scattered across multiple teams, centralized oversight has effectively collapsed.
According to the survey, 67 percent of CISOs report limited visibility into how AI is being used across their organizations.
Even more concerning, none of the surveyed security leaders claimed full visibility into AI deployments. Instead, they acknowledged varying degrees of unmanaged or unsanctioned AI usage throughout their environments.
This lack of visibility creates a cascading problem. Without knowing where AI systems operate or what resources they can access, security teams cannot effectively assess risk. Basic operational questions remain unanswered:
- Which identities do AI systems rely on?
- What data can they reach?
- How do they behave when security controls fail?
Skills Gap, Not Budget Gap
Despite AI security being a regular topic in boardrooms, the study reveals that financial constraints are not the primary barrier. CISOs identified their top obstacles as:
- Lack of internal expertise (50 percent)
- Limited visibility into AI usage (48 percent)
- Insufficient security tools designed specifically for AI systems (36 percent)
Only 17 percent cited budget constraints as a primary concern.
This skills shortage is particularly problematic because AI systems introduce behaviors that traditional security teams are still learning to assess. These include:
- Autonomous decision-making capabilities
- Indirect access paths that bypass conventional controls
- Privileged interactions between systems
- Dynamic behavior patterns that change over time
Without specialized expertise and active testing, organizations struggle to determine whether existing controls are functioning as intended in AI contexts.
Legacy Controls Carrying the Load
In the absence of AI-specific security tools and practices, enterprises are extending existing security controls to cover AI infrastructure. The study found that 75 percent of CISOs rely on legacy security controls—including endpoint, application, cloud, and API security tools—to protect AI systems.
Only 11 percent reported having security tools designed specifically to secure AI infrastructure.
This approach mirrors patterns seen during previous technology shifts, where organizations initially adapt existing defenses before more tailored security practices emerge. While this provides basic coverage, controls built for traditional systems may not account for how AI changes access patterns and expands potential attack paths.
The Path Forward
The findings suggest that AI security challenges stem from foundational gaps rather than a lack of awareness or intent. As AI becomes core to enterprise infrastructure, organizations must focus on:
- Building specialized expertise in AI security assessment
- Improving visibility into AI deployments and usage patterns
- Developing or acquiring tools specifically designed for AI infrastructure security
- Establishing active testing methodologies for AI systems
The report underscores that securing AI requires a fundamental shift in how security teams approach risk assessment, moving beyond traditional control validation to understanding autonomous system behaviors and emergent risks.
For organizations looking to address these challenges, the full AI and Adversarial Testing Benchmark Report 2026 provides deeper insights into the data and practical recommendations for closing these critical security gaps.
