Fake Moltbot AI Coding Assistant on VS Code Marketplace Drops Malware
#Vulnerabilities

Security Reporter

A malicious VS Code extension posing as the Moltbot AI coding assistant grants attackers persistent remote access by silently deploying ConnectWise ScreenConnect, against a backdrop of exposed credentials on misconfigured Moltbot instances.

Cybersecurity researchers have uncovered a malicious Microsoft Visual Studio Code extension that impersonates the popular Moltbot AI coding assistant, dropping malware that grants attackers persistent remote access to compromised systems.

The Threat Landscape

The malicious extension, titled "ClawdBot Agent - AI Coding Assistant" and published under the identifier "clawdbot.clawdbot-agent," was discovered on the official VS Code Extension Marketplace. Published by a user named "clawdbot" on January 27, 2026, the extension has since been removed by Microsoft.

Moltbot has gained significant traction in the developer community, amassing over 85,000 stars on GitHub. The open-source project, created by Austrian developer Peter Steinberger, enables users to run personal AI assistants powered by large language models locally on their devices. The tool integrates with popular communication platforms including WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, and WebChat.

How the Attack Works

What makes this attack particularly concerning is that Moltbot does not have an official VS Code extension. Threat actors exploited the tool's rising popularity to deceive unsuspecting developers into installing the malicious payload.

Upon installation, the extension executes automatically whenever the integrated development environment launches. It retrieves a configuration file named "config.json" from an external server at "clawdbot.getintwopc[.]site" and executes a binary called "Code.exe." This binary deploys ConnectWise ScreenConnect, a legitimate remote desktop program, which then connects to "meeting.bulletmailer[.]net:8041."
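
To make the mechanism concrete, the benign TypeScript skeleton below shows the activation hook such an extension abuses. It assumes a standard VS Code extension project layout (an entry point compiled to JavaScript, with @types/vscode available) and is not the malicious extension's actual source; the comments map the hook to the behavior reported for the fake extension.

```typescript
// Benign skeleton showing why extension code can run on every editor launch.
// In package.json the extension declares:
//   "main": "./out/extension.js",
//   "activationEvents": ["onStartupFinished"]   // older schemas used "*"
// so the extension host calls activate() at each IDE startup with full Node.js
// privileges: network access, child_process, and the file system.
import * as vscode from "vscode";

export function activate(context: vscode.ExtensionContext): void {
  // This body runs automatically at startup. Here it only shows a message; the
  // malicious extension instead used the same hook to fetch "config.json" from
  // clawdbot.getintwopc[.]site and launch the bundled "Code.exe", which installs
  // a ScreenConnect client that connects out to meeting.bulletmailer[.]net:8041.
  void vscode.window.showInformationMessage("activate() ran at editor startup");
}

export function deactivate(): void {}
```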

This connection grants attackers persistent remote access to the compromised host. As Aikido researcher Charlie Eriksen explained, "The attackers set up their own ScreenConnect relay server, generated a pre-configured client installer, and distributed it through the VS Code extension. When victims install the extension, they get a fully functional ScreenConnect client that immediately phones home to the attacker's infrastructure."

Multiple Attack Vectors

The malicious extension employs several fallback mechanisms to ensure the payload is delivered even if one channel is blocked (a local check for these indicators is sketched after the list):

  1. DLL Sideloading: The extension retrieves a DLL listed in "config.json" and sideloads it to obtain the same payload from Dropbox. The DLL, named "DWrite.dll" and written in Rust, ensures the ScreenConnect client is delivered even if the command-and-control infrastructure becomes inaccessible.

  2. Hard-coded URLs: The extension embeds pre-configured URLs to download both the executable and the DLL for sideloading.

  3. Batch Script Alternative: A second method uses a batch script to obtain payloads from a different domain, "darkgptprivate[.]com."
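
The indicators above lend themselves to a quick local check. The following sketch is a plain Node.js/TypeScript script rather than an official detection tool: it walks the default VS Code extensions directory and flags files whose names or contents match the indicators reported here. The directory path assumes a standard VS Code install, and the indicator list is taken only from the details in this article.

```typescript
// Minimal IOC sweep over locally installed VS Code extensions (illustrative sketch).
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// Indicators from this report, kept defanged in source and re-fanged at runtime.
const IOCS = [
  "clawdbot.clawdbot-agent",      // malicious extension identifier
  "clawdbot.getintwopc[.]site",   // config.json host
  "meeting.bulletmailer[.]net",   // ScreenConnect relay
  "darkgptprivate[.]com",         // batch-script fallback domain
  "DWrite.dll",                   // sideloaded Rust DLL
].map((ioc) => ioc.replace(/\[\.\]/g, "."));

// Default extensions directory for a standard VS Code install.
const extensionsDir = path.join(os.homedir(), ".vscode", "extensions");

function scanFile(filePath: string): void {
  let text: string;
  try {
    text = fs.readFileSync(filePath, "utf8");
  } catch {
    return; // unreadable file -- skip
  }
  for (const ioc of IOCS) {
    if (text.includes(ioc)) {
      console.log(`[!] ${ioc} found in ${filePath}`);
    }
  }
}

function walk(dir: string): void {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      walk(full);
    } else if (entry.isFile()) {
      if (IOCS.includes(entry.name)) {
        console.log(`[!] Suspicious file name: ${full}`);
      }
      scanFile(full);
    }
  }
}

if (fs.existsSync(extensionsDir)) {
  walk(extensionsDir);
} else {
  console.log(`No extensions directory found at ${extensionsDir}`);
}
```

Run it with ts-node or compile it with tsc; any hit should be treated as a sign of compromise and investigated, not simply cleaned up in place.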

Broader Security Implications for Moltbot

This incident highlights broader security concerns surrounding Moltbot deployments. Security researcher Jamieson O'Reilly discovered hundreds of unauthenticated Moltbot instances online, exposing sensitive configuration data, API keys, OAuth credentials, and private conversation histories to unauthorized parties.

"The real problem is that Clawdbot agents have agency," O'Reilly explained. "They can send messages on behalf of users across Telegram, Slack, Discord, Signal, and WhatsApp. They can execute tools and run commands."

This architectural capability creates several attack scenarios:

  • Impersonation of operators to their contacts
  • Injection of messages into ongoing conversations
  • Modification of agent responses
  • Exfiltration of sensitive data without user knowledge
  • Distribution of backdoored Moltbot "skills" via MoltHub (formerly ClawdHub) to stage supply chain attacks

Intruder's analysis revealed widespread misconfigurations leading to credential exposure, prompt injection vulnerabilities, and compromised instances across multiple cloud providers. Benjamin Marr, security engineer at Intruder, noted that "Clawdbot prioritizes ease of deployment over secure-by-default configuration. Non-technical users can spin up instances and integrate sensitive services without encountering any security friction or validation. There are no enforced firewall requirements, no credential validation, and no sandboxing of untrusted plugins."

Protection and Mitigation

Users running Moltbot with default configurations should take immediate action:

  1. Audit Configuration: Review all settings and remove unnecessary integrations

  2. Revoke Service Integrations: Disconnect all connected services and re-authenticate with new credentials

  3. Review Exposed Credentials: Identify and rotate any exposed API keys, tokens, or passwords

  4. Implement Network Controls: Configure firewalls and network segmentation to limit exposure

  5. Monitor for Compromise: Watch for unusual activity, unexpected connections, or data exfiltration

  6. Verify Extensions: Only install VS Code extensions from verified publishers and check extension permissions carefully (a quick audit of installed extensions is sketched after this list)

  7. Keep Software Updated: Maintain current versions of all development tools and dependencies
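
As a starting point for step 6, the sketch below lists installed extensions with the `code` command-line interface and flags anything from a publisher outside a locally maintained allow-list, along with the specific identifier from this incident. It assumes the `code` CLI is on the PATH; the allow-list contents shown are placeholders to replace with publishers your team actually trusts.

```typescript
// Review installed VS Code extensions against a local publisher allow-list (sketch).
import { execSync } from "child_process";

// Placeholder allow-list -- replace with the publishers your organization trusts.
const TRUSTED_PUBLISHERS = new Set(["ms-python", "ms-vscode", "github"]);
const KNOWN_BAD = new Set(["clawdbot.clawdbot-agent"]); // identifier from this incident

// `code --list-extensions --show-versions` prints one "publisher.name@version" per line.
const output = execSync("code --list-extensions --show-versions", { encoding: "utf8" });

for (const line of output.trim().split(/\r?\n/).filter(Boolean)) {
  const [id] = line.split("@");       // drop the version suffix
  const publisher = id.split(".")[0]; // identifiers are always publisher.name

  if (KNOWN_BAD.has(id)) {
    console.log(`[!] Known-malicious extension installed: ${line}`);
  } else if (!TRUSTED_PUBLISHERS.has(publisher)) {
    console.log(`[?] Publisher not on allow-list, review manually: ${line}`);
  }
}
```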

The Growing Threat of AI Supply Chain Attacks

This incident represents a concerning trend in AI supply chain attacks, where threat actors exploit the rapid adoption of AI tools to distribute malware. The combination of legitimate remote access tools like ScreenConnect with sophisticated delivery mechanisms through trusted platforms like the VS Code Marketplace demonstrates the evolving sophistication of these attacks.

Developers and organizations must remain vigilant as AI tools become increasingly integrated into development workflows. The ease of deployment that makes tools like Moltbot attractive to users also creates security challenges that require careful consideration and robust security practices.

As AI assistants continue to gain popularity in development environments, the security community must work together to establish best practices for secure deployment, configuration, and monitoring to prevent similar attacks in the future.
