Scammers Hijack AI Search Results with Phone Number Poisoning
Share this article
Scammers Hijack AI Search Results with Phone Number Poisoning
Cybercriminals have developed a sophisticated new attack method that manipulates the very sources AI search tools rely on, steering users toward fraudulent customer support numbers as if they were legitimate contact details. This emerging threat, dubbed "LLM phone number poisoning" by researchers from Aurascape's Aura Labs, represents a significant evolution in cybersecurity challenges as artificial intelligence becomes more integrated into our daily digital lives.
The Emergence of LLM Phone Number Poisoning
According to new research published by Aurascape on December 8, threat actors are "systematically manipulating public web content" to ensure that AI-based systems recommend scam numbers as official customer support contacts. This technique affects systems like Google's AI Overview and Perplexity's Comet browser, which are designed to provide quick, authoritative answers to user queries.
"By seeding poisoned content across compromised government and university sites, popular WordPress blogs, YouTube descriptions, and Yelp reviews, they are steering AI search answers toward fraudulent call centers that attempt to extract money and sensitive data from unsuspecting travelers." — Aurascape researchers
Unlike traditional cybersecurity threats that directly target AI models, this attack works by poisoning the vast ecosystem of web content that AI systems scrape and index to generate their responses. It's a subtle but dangerous manipulation of the information supply chain that powers modern AI assistants.
How the Attack Works: GEO and AEO Exploitation
The attack leverages what cybersecurity researchers call Generative Engine Optimization (GEO) and Answer Engine Optimization (AEO)—techniques designed to ensure websites become sources for AI-based summaries and answers, similar to how Search Engine Optimization (SEO) works for traditional search engines.
The process involves several key steps:
Content Injection: Scammers upload spam content to high-authority websites, including government and university domains, alongside legitimate WordPress blogs.
Platform Abuse: They exploit public services that allow user-generated content, such as YouTube and Yelp, to plant GEO/AEO-optimized text and reviews, often through automated bot comments.
Structured Data: The scam information—including phone numbers and fake Q&A answers—is uploaded in a format specifically designed to be easily scraped by LLMs.
AI Integration: Once these fake sources are in place, LLM-based assistants merge them with legitimate content to provide "trusted" answers to users.
This method effectively creates a "broad, cross-platform contamination effect," as Aurascape notes, meaning the problem isn't isolated to a single AI model or vendor but is becoming systemic across the AI ecosystem.
Real-World Examples of Poisoned Queries
Researchers have documented several instances of this technique being actively used in the wild:
When Perplexity was queried with "the official Emirates Airlines reservations number," the AI returned a "fully fabricated answer that included a fraudulent call-center scam number."
A similar scam call center number appeared when researchers requested the British Airways reservations line.
Google's AI Overview was also compromised, returning multiple fraudulent call-center numbers as legitimate Emirates customer service contacts when asked for the airline's phone number.
These examples demonstrate how the attack can impact major AI systems across different platforms, putting users at risk of financial loss and data theft.
The Systemic Nature of the Threat
What makes LLM phone number poisoning particularly concerning is its systemic nature. Even when AI models provide correct answers, their citation and retrieval layers often reveal exposure to polluted sources.
"This tells us the problem is not isolated to a single model or single vendor—it is becoming systemic," the researchers noted.
The attack can be considered a fork of indirect prompt injection, where website code or functionality is compromised to force an LLM to perform harmful actions. However, unlike direct prompt injection, this attack operates at the content level, making it more difficult to detect and mitigate.
Implications for AI Security and Trust
The emergence of LLM phone number poisoning highlights several critical implications for AI security and trust:
Erosion of Trust: As AI systems increasingly become the primary interface for information retrieval, successful attacks like this could erode user trust in these technologies.
Supply Chain Vulnerabilities: The attack exposes vulnerabilities in the AI content supply chain, where the quality and integrity of sources directly impact the reliability of AI outputs.
Cross-Platform Contamination: The technique demonstrates how a single poisoned source can affect multiple AI systems across different platforms.
Evolving Attack Surfaces: As AI adoption grows, new attack surfaces emerge that security professionals must anticipate and address.
Staying Safe in the Age of AI Search
For users who rely on AI browsers or AI summaries, Aurascape offers several recommendations to stay safe:
Verify Critical Information: Always verify answers provided by AI systems, especially those involving contact information or sensitive data.
Avoid Sharing Sensitive Information: Be cautious about providing sensitive information to AI assistants, particularly given how new and untested these systems are.
Scrutinize Citations: Pay attention to the sources cited by AI systems and be wary if they appear unusual or untrustworthy.
Use Multiple Sources: Cross-reference AI-provided information with traditional search engines or official websites.
Stay Informed: Keep up with the latest AI security developments as researchers and vendors work to address these emerging threats.
The Future of AI Security Challenges
The LLM phone number poisoning attack represents just the beginning of what's likely to be an evolving landscape of AI-specific security threats. As AI systems become more deeply integrated into our digital infrastructure, attackers will continue to develop sophisticated methods to manipulate them.
Security professionals, AI developers, and users must work together to establish robust defenses against these threats. This includes developing better content verification systems, implementing more rigorous source vetting, and creating user education programs that highlight the potential risks of AI-powered information retrieval.
As we become increasingly dependent on AI for information, ensuring the integrity of these systems becomes not just a technical challenge but a critical component of digital safety and trust.