Businesses in 2026: AI Security Becomes Critical Priority as Assessment Rates Double
#Cybersecurity

Businesses in 2026: AI Security Becomes Critical Priority as Assessment Rates Double

Regulation Reporter
6 min read

A World Economic Forum survey reveals that 64% of organizations now assess AI security risks before deployment, up from 37% last year, as 94% of leaders identify AI as the primary driver of cybersecurity change in 2026.

The security landscape for artificial intelligence has shifted dramatically over the past year. According to the World Economic Forum's Global Cybersecurity Outlook 2026, the percentage of organizations implementing methods to identify security risks in their AI tools has nearly doubled, jumping from 37% in 2025 to 64% in 2026. This rapid adoption of AI security assessments reflects a growing recognition among business leaders that AI vulnerabilities represent a unique and escalating threat.

Featured image

The survey, published ahead of the WEF's annual Davos meeting, shows that 94% of respondents believe AI will be the most significant driver of cybersecurity change in 2026. More concerning, 87% believe that AI-associated vulnerabilities have increased, representing a greater threat than any other type of security risk. This perception aligns with real-world incidents covered throughout 2025, including widespread prompt injection attacks, AI code assistants that inadvertently made experienced developers less secure, and a high-profile December incident where Google had to address security issues created by its Gemini AI system.

Data Leaks and Adversarial AI Top Executive Concerns

For C-suite executives, data leaks from AI systems represent the primary fear, followed closely by the advancement of adversarial AI capabilities. The concern about adversarial AI is particularly relevant given that 64% of organizations report that geopolitical matters now play the biggest role in shaping their cyber risk strategies—a concern that has topped the list for consecutive years.

This geopolitical influence on cybersecurity planning shows significant variation based on organization size. Among organizations with more than 100,000 employees, 91% report that geopolitical considerations have changed their security plans. In contrast, only 59% of organizations with fewer than 1,000 employees cite geopolitical factors as a major influence on their cybersecurity strategy.

The connection between geopolitics and AI security is becoming more direct. Russian cyber operations, historically targeting major sporting events and critical infrastructure, demonstrate how political conflicts translate into cyber threats. With the FIFA World Cup scheduled for summer 2026, US organizations may need to prepare for politically-motivated cyberattacks, potentially including AI-enhanced attack methods.

Shifting Threat Priorities Across Roles

The WEF survey reveals notable differences in how various organizational roles prioritize cyber threats. For CEOs, cyber-enabled fraud such as phishing and social engineering remains the number-one concern, followed by AI vulnerabilities and software exploits. Notably, hacktivist threats don't register as a primary concern for CEOs.

Ransomware, which was the chief worry for organizations in 2025, has dropped out of the top three concerns for CEOs in 2026. Similarly, supply chain disruptions, which ranked third in 2025, no longer appear in the top three CEO concerns.

However, the perspective changes significantly when focusing on Chief Information Security Officers (CISOs). For security chiefs, ransomware remains the prime fear, holding the number one position. Supply chain attacks continue to rank second on CISO priority lists. This divergence suggests that while executives may be shifting focus toward AI-specific risks, security professionals maintain traditional threats as their primary concerns.

The Cyber Resilience Gap

The WEF emphasizes that the key to preventing worst-case outcomes is building cyber resilience—the ability to minimize the impact of a cyberattack that penetrates organizational systems. Despite growing awareness of AI security risks, there's a concerning gap between minimum compliance and actual preparedness.

According to the survey, 64% of respondents claim they meet minimum requirements for cyber resilience. However, only 19% believe they exceed baseline standards. This leaves a substantial portion of organizations operating at or near minimum compliance levels, potentially inadequate for sophisticated AI-related attacks.

Real-world examples illustrate the consequences of inadequate resilience. High-profile attacks on organizations like Jaguar Land Rover and Marks & Spencer resulted in extensive and costly downtime periods. These incidents demonstrate how even organizations with established security measures can face prolonged disruptions when attacks succeed.

Industry Context and Alternative Perspectives

The WEF's relatively optimistic findings contrast with observations from other cybersecurity gatherings. At the UK's National Cyber Security Centre (NCSC) annual conference in May, a show of hands among approximately 200 security professionals revealed that not a single attendee could claim strong confidence in their organization's AI security posture.

This discrepancy between survey responses and real-world confidence levels may reflect the difference between formal assessment processes and actual security effectiveness. While 64% of organizations report implementing AI security risk assessments, having a process doesn't necessarily equate to comprehensive protection.

Additional research from Gartner supports the trend toward heightened security awareness. After surveying European CIOs and IT leaders in 2025, Gartner found that many organizations were considering local cloud providers as data sovereignty concerns escalated. This shift reflects broader geopolitical considerations influencing technology decisions.

Emerging AI-Specific Threats

The cybersecurity community has documented numerous AI vulnerabilities over the past year. Prompt injection attacks—where malicious inputs manipulate AI systems into unintended behaviors—have been particularly prevalent. These attacks exploit the fundamental way large language models process natural language inputs.

AI code assistants, while designed to improve developer productivity, have shown concerning side effects. Research indicates these tools can sometimes make experienced developers less secure, potentially by introducing subtle vulnerabilities or encouraging over-reliance on automated suggestions.

The report also notes that criminals are increasingly using AI to "vibe-code" malware, lowering the barrier to entry for creating malicious software. Additionally, researchers have demonstrated how AI agents can be easily manipulated into running malware. For instance, IBM's AI agent "Bob" was shown to be easily duped into executing malicious code, highlighting the challenges of securing autonomous AI systems.

Practical Implications for Organizations

The doubling of AI security assessments suggests organizations are moving from awareness to action. However, the gap between minimum compliance and robust resilience indicates that many organizations may still be vulnerable to sophisticated attacks.

For organizations still developing their AI security posture, the WEF findings suggest several priorities:

  1. Pre-deployment assessment: The rapid increase in security assessments before AI deployment shows this is becoming standard practice. Organizations without formal assessment processes risk falling behind.

  2. Geopolitical awareness: With 64% of organizations citing geopolitics as a major factor in cyber strategy, understanding the political context of potential threats is increasingly important.

  3. Role-specific planning: The difference between CEO and CISO priorities suggests organizations need to ensure security planning addresses both executive concerns and technical realities.

  4. Beyond minimum compliance: With only 19% of organizations exceeding minimum resilience standards, there's room for improvement in building robust recovery capabilities.

The WEF's Global Cybersecurity Outlook 2026 paints a picture of an industry in rapid transition. Organizations are quickly adopting AI security practices, but the effectiveness of these measures and the ability to maintain resilience against evolving threats remain open questions. As AI continues to drive cybersecurity change throughout 2026, the gap between awareness and actual protection will likely determine which organizations successfully navigate the emerging threat landscape.

Comments

Loading comments...