Infostealer Steals OpenClaw AI Agent Configuration Files and Gateway Tokens

Cybersecurity researchers have uncovered a significant development in information stealer capabilities: malware successfully exfiltrating OpenClaw AI agent configuration files and gateway tokens from infected systems. The discovery marks a notable shift in infostealer behavior, from traditional browser credential theft toward targeting the "souls" and identities of personal AI agents.
Hudson Rock, the cybersecurity firm that detected the infection, said the stealer was likely a variant of Vidar, an off-the-shelf information stealer active since late 2018. The data capture wasn't the work of a custom OpenClaw module within the malware; instead, a "broad file-grabbing routine" that searches for specific file extensions and directory names swept up the sensitive data.
The stolen files included several critical components:
- openclaw.json: Contains the OpenClaw gateway token, victim's email address, and workspace path
- device.json: Holds cryptographic keys for secure pairing and signing operations within the OpenClaw ecosystem
- soul.md: Contains the agent's core operational principles, behavioral guidelines, and ethical boundaries
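A routine of this kind doesn't need to know anything about OpenClaw to capture its files. The sketch below is purely illustrative, showing how an extension- and filename-based grabber would sweep up these configuration files as a side effect; the target names and extensions are assumptions for the example, not Vidar's actual configuration, which the researchers did not publish.

```python
import os

# Illustrative target criteria (assumptions, not the real malware's lists)
TARGET_NAMES = {"openclaw.json", "device.json", "soul.md"}
TARGET_EXTENSIONS = {".json", ".md", ".env", ".key"}

def grab_candidates(root):
    """Walk a directory tree and collect paths whose file name or
    extension matches the target criteria."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            _, ext = os.path.splitext(name)
            if name.lower() in TARGET_NAMES or ext.lower() in TARGET_EXTENSIONS:
                hits.append(os.path.join(dirpath, name))
    return hits
```

Because the match is purely name-based, any AI agent that stores its identity and credentials in predictably named plaintext files in the user's home directory is exposed to this class of collection.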
"This finding marks a significant milestone in the evolution of infostealer behavior: the transition from stealing browser credentials to harvesting the 'souls' and identities of personal AI [artificial intelligence] agents," Hudson Rock stated in their analysis.
Gateway Token Theft Enables Remote Access
The theft of the gateway authentication token poses particularly serious risks. If the port is exposed, an attacker could connect to the victim's local OpenClaw instance remotely, or masquerade as the client in authenticated requests to the AI gateway. That level of access would let malicious actors impersonate the AI agent and reach sensitive workflows or data.
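The core problem is that in a bearer-token scheme, possession of the token *is* the client's identity. The sketch below illustrates this with a hypothetical request builder: the endpoint path, port, and the `gateway_token` field name are assumptions for the example, not OpenClaw's documented API.

```python
import json

def build_gateway_request(config_path, gateway_url="http://127.0.0.1:18789"):
    """Read a gateway token from an openclaw.json-style file and build
    the URL and headers an authenticated client request would carry.
    Field name, port, and endpoint are illustrative assumptions."""
    with open(config_path) as f:
        token = json.load(f)["gateway_token"]  # assumed field name
    return {
        "url": f"{gateway_url}/api/agent",      # hypothetical endpoint
        "headers": {"Authorization": f"Bearer {token}"},
    }
```

Anyone holding the stolen file can build exactly the same request as the legitimate client, which is why an exfiltrated token must be treated as a full account compromise and rotated immediately.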
"While the malware may have been looking for standard 'secrets,' it inadvertently struck gold by capturing the entire operational context of the user's AI assistant," Hudson Rock added. "As AI agents like OpenClaw become more integrated into professional workflows, infostealer developers will likely release dedicated modules specifically designed to decrypt and parse these files, much like they do for Chrome or Telegram today."
OpenClaw's Rapid Growth and Security Challenges
OpenClaw has experienced viral growth since its debut in November 2025, amassing over 200,000 stars on GitHub. The open-source project, formerly known as Clawdbot and Moltbot, has become increasingly popular as an AI agent platform. On February 15, 2026, OpenAI CEO Sam Altman announced that OpenClaw's founder, Peter Steinberger, would be joining OpenAI, with the project continuing as an open-source initiative supported by the AI company.
However, this rapid adoption has also made OpenClaw an attractive target for threat actors. SecurityScorecard's STRIKE Threat Intelligence team has identified hundreds of thousands of exposed OpenClaw instances, likely exposing users to remote code execution (RCE) risks. The cybersecurity company explained that "RCE vulnerabilities allow an attacker to send a malicious request to a service and execute arbitrary code on the underlying system. When OpenClaw runs with permissions to email, APIs, cloud services, or internal resources, an RCE vulnerability can become a pivot point."
Supply Chain Attacks on AI Skill Registries
Adding to the security concerns, researchers have identified ongoing malicious campaigns targeting OpenClaw's skill ecosystem. The OpenSourceMalware team detailed a ClawHub malicious skills campaign that uses a new technique to bypass VirusTotal scanning by hosting malware on lookalike OpenClaw websites and using skills purely as decoys, rather than embedding payloads directly in their SKILL.md files.
"The shift from embedded payloads to external malware hosting shows threat actors adapting to detection capabilities," security researcher Paul McCarty noted. "As AI skill registries grow, they become increasingly attractive targets for supply chain attacks."
Additional Security Issues Identified
Beyond the infostealer threat, other security problems have been identified within the OpenClaw ecosystem. OX Security highlighted issues with Moltbook, a Reddit-like internet forum designed exclusively for AI agents running on OpenClaw. Their research found that once an AI agent account is created on Moltbook, it cannot be deleted, leaving users without recourse to remove associated data.
In response to these growing security challenges, OpenClaw maintainers have announced partnerships and security initiatives. The project has partnered with VirusTotal to scan for malicious skills uploaded to ClawHub, established a threat model, and added the ability to audit for potential misconfigurations.
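The article doesn't detail what OpenClaw's misconfiguration audit checks, but one class of check such tooling commonly performs is flagging credential files that other local users can read. The sketch below is an assumption-laden illustration of that idea, not the project's actual audit logic.

```python
import os
import stat

def audit_config_permissions(paths):
    """Flag files that are readable by group or other users --
    an illustrative check, not OpenClaw's real audit."""
    findings = []
    for path in paths:
        if not os.path.exists(path):
            continue
        mode = os.stat(path).st_mode
        if mode & (stat.S_IRGRP | stat.S_IROTH):
            findings.append(
                f"{path}: readable by group/other (mode {oct(mode & 0o777)})"
            )
    return findings
```

Note that file permissions only defend against other accounts on the same machine; they do nothing against an infostealer running as the victim, which is why such audits complement, rather than replace, the registry scanning and threat modeling described above.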
This incident underscores the evolving threat landscape as AI agents become more deeply integrated into professional workflows and personal computing environments. The targeting of AI agent configuration files represents a new frontier in information theft, where the "identity" and operational parameters of AI assistants become valuable targets for cybercriminals.
As AI platforms continue to gain popularity and functionality, security researchers and platform maintainers face an ongoing challenge to stay ahead of threat actors who are rapidly adapting their techniques to exploit these emerging technologies.
