ServiceNow Patches Critical AI Platform Flaw Allowing Unauthenticated User Impersonation
#Vulnerabilities


Security Reporter

ServiceNow has patched a critical vulnerability, CVE-2025-12420, that allowed unauthenticated attackers to impersonate any user—including administrators—by bypassing MFA and SSO protections. The flaw, dubbed 'BodySnatcher' by AppOmni, targeted the Virtual Agent integration and could have enabled attackers to weaponize AI agents for privilege escalation and data exfiltration.

ServiceNow disclosed a critical security vulnerability that exposed its AI platform to unauthenticated user impersonation attacks. Tracked as CVE-2025-12420 and carrying a CVSS score of 9.3, the flaw allowed attackers to bypass multi-factor authentication and single sign-on protections using only a target user's email address.


The BodySnatcher Vulnerability

The vulnerability resided in the Virtual Agent API integration within ServiceNow's AI platform. According to AppOmni's security research, attackers could chain a hardcoded platform-wide secret with account-linking logic that blindly trusted email addresses. This combination allowed complete bypass of standard authentication controls.
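The flaw class described above can be illustrated with a deliberately simplified, hypothetical sketch. None of the names below are ServiceNow's actual internals; they only model the pattern AppOmni describes, in which a shared secret authenticates the caller while an unverified email selects the identity:

```python
# Hypothetical, simplified model of the flaw class AppOmni describes.
# Names are illustrative only, not ServiceNow internals.

PLATFORM_SECRET = "same-value-on-every-instance"  # hardcoded, shipped everywhere

def link_account_vulnerable(secret: str, email: str):
    """Vulnerable pattern: the shared secret authenticates the caller,
    and the email alone picks the identity. Nothing proves the caller
    controls that mailbox, and MFA/SSO never run on this path."""
    if secret != PLATFORM_SECRET:
        return None
    return f"session-for:{email}"  # any email -> a session as that user

def link_account_safer(instance_secret: str, provided: str, idp_assertion):
    """Safer pattern: a per-instance secret plus a verified identity
    assertion (e.g. from the SSO provider) before binding a session."""
    if provided != instance_secret or idp_assertion is None:
        return None
    return f"session-for:{idp_assertion['email']}"
```

The key distinction is that the safer variant derives the identity from a verified assertion rather than from attacker-supplied input, and its secret is unique per instance, so a leak from one deployment does not unlock every other.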

Aaron Costello, Chief of SaaS Security Research at AppOmni, who discovered and reported the flaw in October 2025, described the severity: "BodySnatcher is the most severe AI-driven vulnerability uncovered to date: Attackers could have effectively 'remote controlled' an organization's AI, weaponizing the very tools meant to simplify the enterprise."

The attack vector specifically targeted two ServiceNow components:

  • Now Assist AI Agents (sn_aia): fixed in versions 5.1.18 and 5.2.19; earlier releases are vulnerable
  • Virtual Agent API (sn_va_as_service): fixed in versions 3.15.2 and 4.0.4; earlier releases are vulnerable

How the Exploit Works

The vulnerability enabled unauthenticated attackers to impersonate any ServiceNow user by exploiting weaknesses in the account-linking mechanism. The attack sequence involved:

  1. Authentication Bypass: Using a hardcoded secret to access the Virtual Agent API without credentials
  2. User Impersonation: Supplying any valid user's email address to establish a session as that user
  3. Privilege Escalation: If the target was an administrator, executing AI agents to modify security controls
  4. Persistence: Creating backdoor accounts with elevated privileges

This attack chain effectively neutralized MFA and SSO implementations, which are typically considered robust authentication barriers. The vulnerability was particularly dangerous because it required no user interaction and left minimal forensic evidence.

AI Platform Weaponization

What makes BodySnatcher especially concerning is its impact on ServiceNow's agentic AI capabilities. Once an attacker gained administrative access, they could leverage the platform's built-in AI agents to perform automated actions across the enterprise.

"By chaining a hardcoded, platform-wide secret with account-linking logic that trusts a simple email address, an attacker can bypass multi-factor authentication (MFA), single sign-on (SSO), and other access controls," Costello explained. "With these weaknesses linked together, the attacker can remotely drive privileged agentic workflows as any user."

This represents a shift in attack methodology where vulnerabilities in AI platforms don't just expose data—they provide attackers with automated tools to amplify their impact across the compromised environment.

Patch Deployment Timeline

ServiceNow addressed the vulnerability on October 30, 2025, deploying security updates to the majority of hosted instances. The company also shared patches with ServiceNow partners and self-hosted customers, though the timeline for on-premises deployments depends on individual organizations.

While there is currently no evidence of active exploitation in the wild, the severity of the vulnerability and the detailed public disclosure create a window where attackers could develop exploit code.

Broader Context of AI Platform Security

This disclosure comes just two months after AppOmni revealed another ServiceNow vulnerability involving default configurations in the Now Assist generative AI platform. That earlier flaw allowed second-order prompt injection attacks, enabling data exfiltration and privilege escalation through the AI's natural language processing capabilities.

The pattern suggests that as enterprises rapidly adopt AI-powered platforms, security testing has not kept pace with feature development. Even strong identity controls like MFA and SSO become ineffective when the underlying application logic contains authentication bypasses.

Remediation and Best Practices

Organizations using ServiceNow's AI platform should:

  1. Verify Patch Application: Confirm that all instances run the patched versions (sn_aia 5.1.18+/5.2.19+ and sn_va_as_service 3.15.2+/4.0.4+)
  2. Audit Access Logs: Review authentication logs for suspicious patterns, particularly unusual API access from the Virtual Agent endpoints
  3. Review AI Agent Permissions: Examine what automated actions AI agents can perform and limit administrative privileges
  4. Implement Additional Monitoring: Deploy runtime application security monitoring for ServiceNow instances to detect anomalous behavior
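Step 1 above can be partially automated. The sketch below queries ServiceNow's Table API, assuming the two apps appear in the sys_store_app table with scope and version fields (verify this against your instance), and compares the installed versions to the fixed versions reported in the advisory:

```python
# Hedged sketch for remediation step 1 (verify patch application).
# Assumes the apps are listed in sys_store_app with 'scope' and
# 'version' fields; minimum patched versions are those reported
# (5.1.18/5.2.19 for sn_aia, 3.15.2/4.0.4 for sn_va_as_service).
import base64
import json
import urllib.parse
import urllib.request

INSTANCE = "https://example.service-now.com"   # placeholder instance
USER, PASSWORD = "audit_user", "change-me"     # read-only account

# scope -> minimum patched version per release line
PATCHED = {
    "sn_aia": ["5.1.18", "5.2.19"],
    "sn_va_as_service": ["3.15.2", "4.0.4"],
}

def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def is_patched(installed: str, minimums: list) -> bool:
    """True if 'installed' meets the minimum for its own major.minor
    release line, or is beyond every listed line entirely."""
    v = parse(installed)
    mins = sorted(parse(m) for m in minimums)
    for m in mins:
        if v[:2] == m[:2]:
            return v >= m
    return v > mins[-1]

def installed_version(scope: str) -> str:
    """Look up the app's installed version; '0.0.0' means not found,
    which we conservatively treat as needing attention."""
    qs = urllib.parse.urlencode({"sysparm_query": f"scope={scope}",
                                 "sysparm_fields": "scope,version"})
    req = urllib.request.Request(
        f"{INSTANCE}/api/now/table/sys_store_app?{qs}",
        headers={"Accept": "application/json",
                 "Authorization": "Basic " + base64.b64encode(
                     f"{USER}:{PASSWORD}".encode()).decode()},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        rows = json.load(resp).get("result", [])
    return rows[0]["version"] if rows else "0.0.0"

if __name__ == "__main__":
    for scope, minimums in PATCHED.items():
        ver = installed_version(scope)
        status = "OK" if is_patched(ver, minimums) else "NEEDS PATCH"
        print(f"{scope}: {ver} -> {status}")
```

A version check of this kind supplements, rather than replaces, confirmation through ServiceNow's own advisory and update tooling.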

The vulnerability underscores the importance of securing not just user authentication but also the underlying integration points that AI platforms use to communicate between components, and it reinforces Costello's call for rigorous security testing of AI platform architectures.

ServiceNow customers should apply the available patches immediately and consider temporarily disabling Virtual Agent integrations if immediate patching is not feasible.

Further reading:

  • ServiceNow Security Advisory
  • AppOmni Research Report
  • CVE-2025-12420 Details
