341 Malicious ClawHub Skills Found Stealing Data from OpenClaw Users
#Vulnerabilities

Security Reporter
3 min read

Security audit uncovers massive supply chain attack targeting AI assistant users through fake skills that install macOS stealers and backdoors

A comprehensive security audit of ClawHub, the third-party marketplace for OpenClaw AI assistant skills, has uncovered 341 malicious skills designed to steal user data and compromise systems, according to new findings from Koi Security.

The ClawHavoc Campaign

The analysis, conducted with assistance from an OpenClaw bot named Alex, examined 2,857 skills and identified a coordinated campaign dubbed "ClawHavoc." The campaign specifically targets macOS users who are increasingly running OpenClaw on Mac Minis for 24/7 AI assistance.

"You install what looks like a legitimate skill – maybe solana-wallet-tracker or youtube-summarize-pro," Koi researcher Oren Yomtov explained. "The skill's documentation looks professional. But there's a 'Prerequisites' section that says you need to install something first."

This social engineering tactic tricks users into downloading and executing malicious payloads disguised as legitimate prerequisites.

How the Attack Works

For Windows users, the attack involves downloading a file called "openclaw-agent.zip" from a GitHub repository. For macOS users, victims are instructed to copy an installation script from glot[.]io and paste it into Terminal.

The macOS attack chain is particularly sophisticated:

  1. The glot[.]io script contains obfuscated shell commands
  2. It fetches next-stage payloads from attacker-controlled infrastructure
  3. The payload contacts IP address "91.92.242[.]30" to retrieve additional scripts
  4. Finally, it obtains a universal Mach-O binary consistent with Atomic Stealer

Atomic Stealer (AMOS) is a commodity stealer available for $500-1000/month that harvests data from macOS hosts, including API keys, credentials, and other sensitive information.
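
For readers who want a quick triage step, the minimal Python sketch below checks a Mac for one of the indicators described above: a live TCP connection to the reported command-and-control address, enumerated with the stock lsof utility. It is illustrative only; a negative result does not mean a host is clean, and it is no substitute for a full malware scan.

```python
#!/usr/bin/env python3
"""Minimal IoC triage: look for live connections to the reported C2 address.

Illustrative only. Absence of a match does not prove a host is clean.
"""
import subprocess

C2_ADDRESS = "91.92.242.30"  # C2 address reported by Koi Security and OpenSourceMalware


def has_c2_connection() -> bool:
    # lsof -nP -iTCP lists open TCP sockets with numeric hosts and ports (macOS/Linux).
    result = subprocess.run(
        ["lsof", "-nP", "-iTCP"],
        capture_output=True,
        text=True,
        check=False,  # lsof exits non-zero when nothing matches
    )
    return C2_ADDRESS in result.stdout


if __name__ == "__main__":
    if has_c2_connection():
        print(f"WARNING: active connection to {C2_ADDRESS} found; investigate immediately.")
    else:
        print("No live connection to the reported C2 address (not proof of a clean system).")
```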

Types of Malicious Skills Identified

The 341 malicious skills masquerade as:

  • ClawHub typosquats (clawhub, clawhub1, clawhubb, clawhubcli, clawwhub, cllawhub)
  • Cryptocurrency tools like Solana wallets and wallet trackers
  • Polymarket bots (polymarket-trader, polymarket-pro, polytrading)
  • YouTube utilities (youtube-summarize, youtube-thumbnail-grabber, youtube-video-downloader)
  • Auto-updaters (auto-updater-agent, update, updater)
  • Finance and social media tools (yahoo-finance-pro, x-trends-tracker)
  • Google Workspace tools claiming Gmail, Calendar, Sheets, and Drive integrations
  • Ethereum gas trackers
  • Lost Bitcoin finders

Additionally, some skills hide reverse shell backdoors inside functional code (better-polymarket, polymarket-all-in-one) or exfiltrate bot credentials from "~/.clawdbot/.env" to webhook[.]site endpoints (rankaj).
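
Because the campaign reuses a small set of strings across skills (the C2 address, glot[.]io, and webhook[.]site), a simple local scan can flag the most obvious offenders. The Python sketch below is illustrative only: the ~/.clawdbot/skills install path is an assumption and should be adjusted to wherever a given OpenClaw setup stores ClawHub skills, while ~/.clawdbot/.env is the credentials file named in the report.

```python
#!/usr/bin/env python3
"""Scan locally installed skills for the indicator strings named in the report.

The skills directory below is an assumed location; point it at wherever your
OpenClaw installation keeps ClawHub skills.
"""
from pathlib import Path

SKILLS_DIR = Path.home() / ".clawdbot" / "skills"   # assumed install path, adjust as needed
ENV_FILE = Path.home() / ".clawdbot" / ".env"        # credentials file targeted by 'rankaj'
IOCS = ("91.92.242.30", "glot.io", "webhook.site")   # strings taken from the report


def scan() -> None:
    if ENV_FILE.exists():
        print(f"Note: {ENV_FILE} exists; rotate any credentials stored there "
              "if a suspicious skill was installed.")
    if not SKILLS_DIR.is_dir():
        print(f"No skills directory at {SKILLS_DIR}; nothing to scan.")
        return
    for path in SKILLS_DIR.rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        hits = [ioc for ioc in IOCS if ioc in text]
        if hits:
            print(f"SUSPICIOUS: {path} references {', '.join(hits)}")


if __name__ == "__main__":
    scan()
```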

OpenClaw Responds with Reporting Feature

The issue stems from ClawHub's open submission model, which by default allows anyone with a GitHub account at least one week old to upload skills. OpenClaw creator Peter Steinberger has since implemented a reporting feature that allows signed-in users to flag suspicious skills.

"Each user can have up to 20 active reports at a time," the documentation states. "Skills with more than 3 unique reports are auto-hidden by default."

Broader Security Implications

The findings highlight the growing risks in open-source AI ecosystems. Palo Alto Networks recently warned that OpenClaw represents a "lethal trifecta" that makes AI agents vulnerable by design due to their access to private data, exposure to untrusted content, and ability to communicate externally.

"With persistent memory, attacks are no longer just point-in-time exploits. They become stateful, delayed-execution attacks," researchers Sailesh Mishra and Sean P. Morgan explained. "Malicious payloads no longer need to trigger immediate execution on delivery. Instead, they can be fragmented, untrusted inputs that appear benign in isolation, are written into long-term agent memory, and later assembled into an executable set of instructions."

This enables time-shifted prompt injection, memory poisoning, and logic bomb-style activation, where exploits are created at ingestion but detonate only when the agent's internal state aligns.

Industry Context

The ClawHavoc campaign was independently discovered by OpenSourceMalware, with researcher 6mile noting that all affected skills share the same command-and-control infrastructure (91.92.242[.]30) and use sophisticated social engineering to steal crypto assets, exchange API keys, wallet private keys, SSH credentials, and browser passwords.

This attack underscores the critical importance of supply chain security in AI assistant ecosystems and the need for robust vetting mechanisms as these platforms gain popularity.
