
Major AI chatbots are serving as unwitting conduits for Russian state propaganda, citing sanctioned media outlets tied to Kremlin disinformation campaigns when users ask about the war in Ukraine. A study by the Institute for Strategic Dialogue (ISD), conducted in July and verified in October, tested 300 queries across OpenAI's ChatGPT, Google's Gemini, DeepSeek, and xAI's Grok in English, Spanish, French, German, and Italian. Nearly 20% of responses cited Russian state-attributed sources such as Sputnik, RT (formerly Russia Today), and sites linked to intelligence agencies, entities sanctioned by the EU for spreading false narratives intended to destabilize Europe. As chatbots increasingly replace traditional search engines, with ChatGPT alone averaging 120.4 million monthly EU users, the findings show how large language models (LLMs) can amplify harmful content in real-time information ecosystems.

Exploiting Data Voids: The Disinformation Playbook

At the heart of the issue is the exploitation of 'data voids': gaps where legitimate sources lack real-time information on emerging topics, allowing bad actors to flood the web with misleading content. Pablo Maristany de las Casas, the ISD analyst who led the research, explains: 'Russian propaganda targets these voids to promote false narratives, and chatbots often pull from these tainted sources when generating responses.' The study posed neutral, biased, and malicious questions on topics such as NATO's role, Ukrainian refugees, and war crimes. Malicious queries, those demanding answers that back a preconceived opinion, surfaced Russian propaganda 25% of the time, compared with 10% for neutral prompts. This sensitivity to loaded framing shows how LLMs can inadvertently validate and spread disinformation, especially on contentious subjects.
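To make the methodology concrete, here is a minimal sketch in Python of how such an audit could be structured: group prompts by intent, query a chatbot, and count responses that cite flagged state-media domains. The ask_chatbot stub, the FLAGGED_DOMAINS list, and the category names are illustrative assumptions, not the ISD's actual tooling.

```python
import re

# Illustrative blocklist of EU-sanctioned state-media domains named in the
# reporting; a real audit would rely on a maintained sanctions list.
FLAGGED_DOMAINS = {"rt.com", "sputniknews.com"}

# Placeholder for whichever chatbot is under test (ChatGPT, Gemini, Grok,
# DeepSeek); swap in the relevant vendor SDK call.
def ask_chatbot(prompt: str) -> str:
    raise NotImplementedError("wire up a real chatbot client here")

# Extract the host part of any http(s) URL in a response.
URL_RE = re.compile(r"https?://(?:www\.)?([^/\s)\"']+)")

def cites_flagged_source(response: str) -> bool:
    """True if the response links to any flagged domain."""
    hosts = {m.group(1).lower() for m in URL_RE.finditer(response)}
    return any(h.endswith(d) for h in hosts for d in FLAGGED_DOMAINS)

def audit(prompts_by_intent: dict[str, list[str]]) -> dict[str, float]:
    """Rate of responses citing flagged sources, per prompt category
    (e.g. 'neutral', 'biased', 'malicious')."""
    return {
        intent: sum(cites_flagged_source(ask_chatbot(p)) for p in prompts)
                / len(prompts)
        for intent, prompts in prompts_by_intent.items()
    }
```

Run over a few hundred prompts per category, comparisons like the ISD's 25% rate for malicious queries versus 10% for neutral ones fall straight out of the returned dictionary.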

Chatbot Vulnerabilities: A Breakdown of Findings

The ISD report details stark differences in how each platform handled the propaganda threat. ChatGPT was the most susceptible, frequently citing Russian sources and showing high sensitivity to biased inputs. Grok often linked to social media accounts that amplified Kremlin talking points, while DeepSeek generated large volumes of state-attributed content. Google's Gemini performed best, regularly displaying safety warnings and filtering out more harmful material. Lukasz Olejnik, an independent security researcher and visiting fellow at King's College London, contextualizes the risk: 'As LLMs become the go-to reference tool, attacking this information infrastructure is a strategic move by Russia. From the EU and US perspective, this highlights a clear danger to democratic discourse.'

Russia's AI-Enabled Disinformation Machine

Since its full-scale invasion of Ukraine in 2022, Russia has tightened information control domestically while ramping up global disinformation efforts. Networks like 'Pravda' use AI to mass-produce fake content, creating millions of articles that poison LLM training data. McKenzie Sadeghi of NewsGuard, who has tracked Pravda, notes: 'They flood data voids with false information, making chatbots parrot narratives that gain undeserved authority. Continuous guardrails are needed to counter this.' The Kremlin's strategy includes rapidly rotating domains to evade sanctions, ensuring the propaganda persists even as regulators scramble to respond. A spokesperson for the Russian Embassy in London defended the outlets, calling the EU sanctions 'repression' that undermines free expression, a stance that underscores the geopolitical stakes.

The Regulatory Imperative and AI's Future

With ChatGPT's EU usage approaching the threshold for designation as a Very Large Online Platform (VLOP) under the Digital Services Act, which imposes strict content moderation rules, pressure is mounting on tech firms to act. OpenAI says it combats misinformation through model improvements but disputes the ISD's methodology, stating: 'This research references search results from the internet, not purely model-generated responses.' Yet, as Maristany de las Casas argues, solutions must go beyond blocking domains: 'Companies need consensus on excluding sources linked to disinformation campaigns and provide context about sanctions to users.' These findings are not just a technical glitch; they are a wake-up call for the AI industry to fortify its defenses against weaponized information, so that the promise of intelligent assistants does not become a vector for digital warfare.
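As one way to picture what 'context about sanctions' could mean in practice, here is a small hypothetical post-processing step: scan a chatbot's answer for citations of sanctioned outlets and append a disclosure rather than silently suppressing them. The SANCTION_NOTES mapping and the annotate_sources helper are assumptions for illustration, not any vendor's actual guardrail.

```python
import re

# Illustrative mapping from sanctioned outlet domains to one-line
# disclosures; a production system would pull from an authoritative feed.
SANCTION_NOTES = {
    "rt.com": "RT is sanctioned by the EU over state disinformation.",
    "sputniknews.com": "Sputnik is sanctioned by the EU over state disinformation.",
}

# Extract the host part of any http(s) URL in a response.
URL_RE = re.compile(r"https?://(?:www\.)?([^/\s)\"']+)")

def annotate_sources(response: str) -> str:
    """Append sanction context for any flagged domain the response cites."""
    hosts = {m.group(1).lower() for m in URL_RE.finditer(response)}
    notes = [note for domain, note in SANCTION_NOTES.items()
             if any(h.endswith(domain) for h in hosts)]
    if notes:
        response += "\n\nSource context: " + " ".join(notes)
    return response
```

Annotation of this kind keeps the citation visible while giving users the regulatory context the ISD says is currently missing.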

Source: Based on research by the Institute for Strategic Dialogue and reporting from WIRED.