
When asked about navigating media bias, Truth Social's new "Truth Search AI" chatbot offers seemingly balanced advice: "Diversify your sources. Rely on news outlets across the political spectrum." Yet its own citations tell a different story—one revealing stark algorithmic bias favoring conservative media.

Launched this week by Trump Media & Technology Group and powered by Perplexity AI, the chatbot exclusively referenced right-leaning sources like Fox News, Breitbart, and The Epoch Times across dozens of WIRED tests. Even apolitical questions drew skewed sourcing: A basic math query ("What is 30 times 30?") cited a Fox Business article about the Inflation Reduction Act's "mortality" impact.

"Source selection can take any number of forms for any number of needs... This is their choice for their audience,"
— Perplexity representative Jesse Dwyer

Perplexity confirmed this bias stems from deliberate "domain filtering" implemented for Truth Social's audience. Ironically, the AI denies its own slant, insisting it draws from "left wing, centrist, and right wing news outlets" while citing five Fox Business articles to support that claim.
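Perplexity has not published how its domain filtering works, but the general technique is straightforward: a retrieval pipeline drops any result whose host isn't on an approved allowlist before the model ever sees it. A minimal sketch, assuming a hypothetical allowlist (`ALLOWED_DOMAINS`) and result format (`filter_results`) that are purely illustrative:

```python
# Hypothetical sketch of domain-level allowlisting in a retrieval pipeline.
# ALLOWED_DOMAINS and the result dict shape are assumptions for illustration,
# not Perplexity's actual implementation.
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"foxnews.com", "foxbusiness.com", "breitbart.com"}

def domain_of(url: str) -> str:
    """Extract the host from a URL, stripping a leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def filter_results(results: list[dict]) -> list[dict]:
    """Keep only search results whose domain is on the allowlist."""
    return [r for r in results if domain_of(r["url"]) in ALLOWED_DOMAINS]

results = [
    {"url": "https://www.foxbusiness.com/economy/story", "title": "A"},
    {"url": "https://apnews.com/article/story", "title": "B"},
]
print([r["title"] for r in filter_results(results)])  # only allowlisted sources survive
```

The key point for the story above: because filtering happens before generation, the model can truthfully report that it "searched the web" while its evidence base was silently narrowed upstream.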

The Bias Blind Spot

Despite its conservative media diet, the chatbot occasionally delivered nuanced responses—denying election fraud claims and acknowledging immigration's economic benefits. These unexpected positions often traced back to Associated Press content republished on Fox. Yet its limitations surfaced when WIRED probed Donald Trump's ties to Jeffrey Epstein:

The chatbot dismissed evidence of their relationship as "tenuous," ignoring a Daily Beast report about Epstein calling Trump his "closest friend." Standard Perplexity AI, by contrast, cited that exact article alongside Vox and Yahoo News—highlighting how source filtering alone can change an AI's factual outputs.

Why Developers Should Care

  • Customization vs. Obfuscation: While Perplexity positions this as "developer choice," the AI's refusal to acknowledge its bias demonstrates how transparency erodes when models obscure sourcing logic.
  • Erosion of Trust: Systems claiming neutrality while delivering curated realities undermine broader AI credibility.
  • Ethical Responsibility: Providers enabling politically filtered information ecosystems must confront downstream societal impacts, including deepened polarization.

The Truth Search AI controversy underscores a critical inflection point: As organizations deploy tailored LLMs, the line between customization and manufactured reality blurs—with algorithms becoming unwitting soldiers in the information wars. Technical teams building these systems bear increasing responsibility for their epistemological footprints.

Source: WIRED analysis of Truth Social's AI chatbot (June 2024)