A critical vulnerability in the popular AI assistant OpenClaw, a project with more than 149,000 stars on GitHub, allows attackers to execute arbitrary code on victims' machines through a single malicious link.
A critical security flaw has been discovered in OpenClaw, a rapidly growing open-source AI personal assistant, that allows remote code execution (RCE) through a single malicious link. The vulnerability, tracked as CVE-2026-25253 with a CVSS score of 8.8, affects the platform's Control UI and could enable attackers to compromise user systems without any interaction beyond clicking a link.
The Vulnerability Explained
The issue stems from how OpenClaw handles WebSocket connections and authentication tokens. According to Peter Steinberger, the project's creator and maintainer, the Control UI automatically trusts the gatewayUrl parameter from query strings without proper validation. When a user visits a malicious website or clicks a crafted link, the browser automatically connects to the OpenClaw gateway and sends the stored authentication token in the WebSocket connection payload.
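The flawed pattern is easiest to see next to its fix. The sketch below, in Python for illustration, shows the kind of allowlist validation the Control UI was missing; the gatewayUrl parameter comes from the advisory, while the helper name, allowed hosts, and port are hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: the only gateway hosts the Control UI should accept.
ALLOWED_GATEWAY_HOSTS = {"127.0.0.1", "localhost"}

def is_safe_gateway_url(gateway_url: str) -> bool:
    """Reject gatewayUrl values that point anywhere but a trusted local host."""
    parsed = urlparse(gateway_url)
    if parsed.scheme not in ("ws", "wss"):
        return False
    return parsed.hostname in ALLOWED_GATEWAY_HOSTS

# A crafted link can smuggle in an attacker-controlled endpoint, which an
# unvalidated client would connect to and send its stored token.
print(is_safe_gateway_url("ws://127.0.0.1:8080/gateway"))   # True
print(is_safe_gateway_url("wss://attacker.example/steal"))  # False
```

The point is not the specific allowlist but the principle: any connection target taken from a query string must be validated before the client attaches credentials to it.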
This token exfiltration vulnerability effectively allows attackers to steal privileged credentials and gain operator-level access to the victim's OpenClaw instance. Once obtained, the token can be used to modify configurations, disable security safeguards, and execute arbitrary commands on the host machine.
How the Attack Works
Mav Levin, founding security researcher at depthfirst who discovered the vulnerability, outlined a sophisticated attack chain that unfolds in milliseconds:
1. Cross-site WebSocket hijacking: OpenClaw's server fails to validate the WebSocket Origin header, accepting requests initiated from any website even when the server is bound to localhost.
2. Token theft: Client-side JavaScript on the malicious page retrieves the authentication token from the victim's browser.
3. Authentication bypass: The stolen token is used to establish a WebSocket connection to the OpenClaw server.
4. Security disabling: The attacker leverages the token's privileged operator.admin and operator.approvals scopes to disable user confirmations and escape container restrictions.
5. Code execution: Finally, the attacker executes arbitrary commands using the node.invoke API.
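The first step hinges on a single missing check. During a WebSocket upgrade the browser always sends the Origin of the page that opened the connection, so a server can refuse cross-site upgrades outright. A minimal sketch of that check, with a hypothetical allowlist and port (the real fix lives in OpenClaw's gateway code):

```python
# Hypothetical trusted origins for the local Control UI; the port is
# illustrative, not OpenClaw's actual configuration.
ALLOWED_ORIGINS = {"http://127.0.0.1:18789", "http://localhost:18789"}

def should_accept_upgrade(headers: dict) -> bool:
    """Reject WebSocket upgrades initiated from untrusted pages.

    Browsers set the Origin header automatically and pages cannot forge
    it, so this check defeats cross-site WebSocket hijacking.
    """
    origin = headers.get("Origin", "")
    return origin in ALLOWED_ORIGINS

# A connection riding in from an attacker's page carries that page's Origin:
print(should_accept_upgrade({"Origin": "https://evil.example"}))    # False
print(should_accept_upgrade({"Origin": "http://localhost:18789"}))  # True
```

Because the browser, not the attacker's server, supplies the Origin header, this one comparison is enough to stop step 1 of the chain even though the victim's browser sits inside the trusted network boundary.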
The attack is particularly dangerous because it works even when OpenClaw is configured to listen only on loopback interfaces. The victim's browser acts as the bridge, initiating the outbound connection that carries the token to the attacker.
Technical Deep Dive
What makes this vulnerability especially concerning is how it bypasses OpenClaw's security architecture. The platform uses Docker containers to sandbox tool execution and prevent malicious actions from AI models. However, the attack chain allows bypassing these protections entirely.
By setting tools.exec.host to "gateway", the attacker forces the AI agent to run commands directly on the host machine rather than inside the Docker container. This container escape effectively nullifies the sandboxing mechanism that was designed to contain potentially harmful actions from AI models.
Additionally, the attacker can disable approvals through the exec.approvals.set operation, removing the user-confirmation requirement for privileged operations and making the attack completely silent from the user's perspective.
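A common defensive pattern against this class of abuse is to treat settings that widen the blast radius as privileged in their own right, so a stolen session token alone cannot flip them silently. This is a hypothetical sketch of that idea, not OpenClaw's actual configuration code; the setting names come from the article, the guard is an assumption:

```python
# Settings named in the advisory that expand what a compromised
# session can do; changing them should demand fresh user consent.
SENSITIVE_SETTINGS = {"tools.exec.host", "exec.approvals.set"}

def apply_setting(key: str, value, *, user_confirmed: bool) -> str:
    """Apply a config change, refusing silent changes to sensitive keys."""
    if key in SENSITIVE_SETTINGS and not user_confirmed:
        raise PermissionError(f"{key} requires explicit user confirmation")
    return f"{key} set to {value!r}"

print(apply_setting("theme", "dark", user_confirmed=False))  # ordinary setting, allowed
try:
    apply_setting("tools.exec.host", "gateway", user_confirmed=False)
except PermissionError as err:
    print("blocked:", err)
```

Requiring an out-of-band confirmation for such settings would not have fixed the token leak itself, but it would have kept a stolen token from quietly dismantling the sandbox.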
Impact and Mitigation
The vulnerability affects any OpenClaw deployment where a user has authenticated to the Control UI. Given the project's rapid growth - surpassing 149,000 stars on GitHub since its November 2025 release - the potential attack surface is significant.
OpenClaw has released version 2026.1.29 that addresses the vulnerability by implementing proper validation of WebSocket connections and query parameters. Users are strongly advised to update immediately to protect against potential exploitation.
Security Architecture Implications
Asked whether this represents an architectural limitation, Levin noted that the security defenses were designed to contain malicious actions from AI models, such as those resulting from prompt injection attacks. Users might assume these defenses would also protect against this type of vulnerability, or at least limit the blast radius, but they don't.
This highlights a broader challenge in AI security: traditional sandboxing and safety mechanisms may not be sufficient to protect against web-based attacks that exploit authentication and connection handling flaws.
Context and Response
The discovery underscores the security challenges facing rapidly adopted open-source AI tools. As these platforms gain popularity, they become attractive targets for attackers looking to compromise systems through novel attack vectors.
Steinberger emphasized that the vulnerability is exploitable even on instances configured for loopback-only access, as the victim's browser initiates the outbound connection. This makes network segmentation and firewall rules ineffective as mitigation strategies.
The security community has praised the responsible disclosure process, with the vulnerability being addressed quickly after discovery. However, the incident serves as a reminder that even well-intentioned open-source projects can contain critical security flaws that require immediate attention.
For organizations using OpenClaw or similar AI assistants, this incident highlights the importance of:
- Keeping software updated with the latest security patches
- Being cautious about clicking links from untrusted sources
- Understanding the security implications of AI tools that run with elevated privileges
- Implementing network monitoring to detect unusual outbound connections
The vulnerability demonstrates how modern AI tools, while offering powerful capabilities, also introduce new attack surfaces that require careful security consideration and robust validation mechanisms.
