Security researchers uncover AiFrame campaign where fake AI chatbot extensions steal API keys, emails, and browsing data from 260,000+ Chrome users
Security researchers have uncovered a massive campaign of malicious Chrome extensions disguised as AI chatbots that have stolen sensitive data from over 260,000 users. The campaign, dubbed AiFrame by LayerX Security, involves more than 30 extensions that impersonate popular AI assistants like Claude, ChatGPT, Gemini, and Grok, while secretly harvesting users' API keys, email messages, and browsing data.

The Scale of the Threat
What makes this campaign particularly concerning is that many of these malicious extensions remain available on the Chrome Web Store as of publication. Despite different names and extension IDs, all 32 identified extensions share the same underlying codebase and communicate with infrastructure under the tapnetic[.]pro domain.
The extensions have been remarkably successful at evading detection. Some were published under new IDs after earlier versions were removed. For example, AI Sidebar (gghdfkafnhfpaooiolhncejnlgglhkhe) appeared after the earlier Gemini AI Sidebar (fppbiomdkfbhgjjdmojlogeceejinadg) was removed, yet both accumulated significant user bases: the newer AI Sidebar reached roughly 80,000 users while Gemini AI Sidebar had about 50,000 before its removal, and the newer version currently lists around 70,000 users.
How the Attack Works
These extensions employ sophisticated techniques to steal user data while maintaining the appearance of legitimate AI tools. One particularly insidious extension, AI Assistant (nlhpidbjmmffhoogcennoiopekbiglbp), even earned a "Featured" badge on the Chrome Web Store.
This extension renders an iframe overlay that passes visually for the extension's own interface. Because the iframe loads remote content, the operator can change its UI and logic, and silently add new capabilities, at any time without pushing an update through the Chrome Web Store. As LayerX Security researcher Natalie Zargarov explained, "When instructed by the iframe, the extension queries the active tab and invokes a content script that extracts readable article content using Mozilla's Readability library."
Data Harvesting Capabilities
The extensions collect an alarming range of sensitive information:
- API keys and authentication details from any page the user visits
- Email content including visible messages, drafts, and compose text
- Browsing data including titles, text content, excerpts, and site metadata
- Speech recognition data through transcription capabilities
Nearly half of the extensions specifically target Gmail, sharing the same Gmail integration codebase. This allows them to read visible email content directly from the DOM and extract message text via textContent from Gmail's conversation view, including email thread content and draft-related text.
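The extraction step relies on the DOM's `textContent` property, which concatenates every text node under an element while discarding all markup, so even styled or nested email bodies collapse into clean, exfiltration-ready text. As a rough, library-agnostic illustration of that behavior (a Python stdlib analogue, not the extensions' actual code; the sample markup is hypothetical, not real Gmail structure):

```python
from html.parser import HTMLParser


class TextContentExtractor(HTMLParser):
    """Collect raw text nodes, mimicking the DOM textContent property."""

    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        # Called for every text node; tags themselves are discarded.
        self.chunks.append(data)


def text_content(html: str) -> str:
    """Concatenate all text nodes in the given HTML fragment."""
    parser = TextContentExtractor()
    parser.feed(html)
    return "".join(parser.chunks)


# Hypothetical conversation-view fragment: nested tags vanish,
# leaving only the readable message text.
snippet = '<div class="msg"><span>Quarterly numbers attached.</span> <b>Do not forward.</b></div>'
print(text_content(snippet))  # → Quarterly numbers attached. Do not forward.
```

The same flattening happens in one property access in the browser, which is why a content script with DOM access can harvest an entire open conversation without parsing Gmail's markup at all.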
The Psychology Behind the Attack
"The campaign exploits the conversational nature of AI interactions, which has conditioned users to share detailed information," Zargarov noted in an email. "By injecting iframes that mimic trusted AI interfaces, they've created a nearly invisible man-in-the-middle attack that intercepts everything from API keys to personal data before it ever reaches the legitimate service."
This exploitation of user trust represents a significant evolution in browser-based attacks. Users have become accustomed to sharing sensitive information with AI assistants, making them particularly vulnerable to these deceptive extensions.
Google's Response
Google did not immediately respond to inquiries about the malicious extensions. That silence is concerning given the scale of the campaign and the sensitivity of the data being stolen.
Protecting Yourself
All 32 extension IDs are listed in LayerX's report, and users are strongly advised to check this list before adding any AI assistant extension to their browser. The fact that extensions with hundreds of thousands of users can operate undetected for extended periods highlights the need for increased vigilance when installing browser extensions.
This campaign serves as a stark reminder that even official app stores can harbor sophisticated malware, and users should exercise extreme caution when granting permissions to browser extensions, especially those claiming to access sensitive data like emails and browsing activity.
For those who have installed any suspicious AI assistant extensions, immediate removal is recommended, along with changing passwords and API keys that may have been compromised.
