A seismic shift is brewing in AI policy advocacy. Facing political headwinds and stalled regulatory progress, prominent voices in AI safety—including figures like Eliezer Yudkowsky and organizations like PauseAI and the ML Alignment & Theory Scholars program (MATS)—are pivoting toward building a mass movement. Their goal: create widespread public salience around existential AI risks to pressure policymakers. Yet this strategy, argues policy analyst Anton Leicht, is fraught with peril and may ultimately sabotage the very cause it seeks to advance.

The Allure and Architecture of a Movement

Proponents see movement-building as necessary leverage. With elite-focused advocacy struggling against industry lobbying and shifting US political priorities, they believe only mobilized public demand can force meaningful safety regulations. The blueprint involves:
1. Simplified Messaging: Translating complex technical risks (like loss-of-control scenarios) into broadly resonant themes—job displacement, inequality, or anti-corporate sentiment.
2. Community Building: Creating local and global networks united by concern about AI's dangers.
3. Political Mobilization: Channeling this constituency toward specific policy demands.

However, Leicht argues this approach fundamentally misunderstands the nature of both AI safety and effective movements.

Why AI Safety is a Poor Fit for Mass Mobilization

1. The Moving Target Problem: Effective AI safety policy requires extreme technical agility. Threats evolve rapidly (e.g., shifts in model architectures, compute thresholds, geopolitical dynamics). Movements thrive on simple, static demands ("Stop X!" "Regulate Y!"). A movement anchored to today's policy ask (e.g., compute caps) becomes obsolete—or obstructive—tomorrow. As Leicht previously detailed, the technical landscape shifts too fast for popular movements to track accurately.

2. Lack of Observable Wins: Successful movements rally around tangible victories (e.g., passed laws, visible environmental cleanup). AI safety lacks clear, public metrics of success. Is an AI model made safer by being kept internal? Is reduced external deployment progress or a setback? This ambiguity starves movements of motivating achievements.

3. Inevitable Capture & Drift: Movements built on broad coalitions (fear of job loss, anti-tech sentiment, inequality concerns) are vulnerable to mission drift. The core safety message risks being drowned out or hijacked by louder, simpler adjacent agendas.


    alt="Article illustration 5"
    loading="lazy">

Leicht points to how the climate movement expanded beyond its initial scientific focus, warning: "Safety-focused ideas can quickly lose their grasp on the movement they helped build."

Concrete Harms: Credibility Erosion and the Astroturf Trap

The dangers aren't merely theoretical:

  • Undermining Organic Support: Existing public concern about AI (evident in opposition to a federal moratorium on state AI laws) holds genuine political weight. A visibly "incubated" movement risks painting all safety support as astroturfed (artificially manufactured). Policymakers, already wary of funding ties in the safety space, may dismiss genuine grassroots concern as orchestrated campaigns. Leicht notes: "AI safety risks squandering a rare instance of the public already on its side."

  • Tarnishing Expert Advocacy: Elite safety researchers and policy experts rely on credibility. Association with a chaotic or perceived "crackpot" movement—especially one prone to over-simplification or protest excesses—damages this vital asset. Leicht draws parallels to the reputational hit effective altruism (EA) took from the FTX collapse: "Whenever the movement missteps, it will serve as grounds for attacks against the general safety platform."

  • The Radical Flank Effect Backfire: While radical elements can sometimes benefit moderates by comparison, this requires clear separation. Funding and personnel links between nascent safety movements and established organizations make such separation impossible. Protests or controversial tactics will directly stain core safety institutions.

A Narrower Path Forward

Leicht doesn't advocate apathy but suggests a more targeted strategy:
1. Issue-Specific Coalitions: Build alliances around narrower, tractable issues with natural constituencies (e.g., artists on copyright, labor on job impacts, child safety groups on CSAM risks). These resist drift and minimize guilt-by-association for broader safety efforts.
2. Protect Elite Credibility: Maintain focus on expert research and nuanced policy engagement. Funders and organizations should publicly distance themselves from broad movement-building efforts.
3. Avoid Over-Politicization: Resist the urge to "flip the gameboard" with populist tactics that could derail the complex, technical policy work actually needed.

The push for a safety movement reflects genuine frustration and urgency. But harnessing the volatile power of mass politics threatens to trade the field's hard-won credibility and technical focus for a force it cannot reliably steer—potentially setting back the cause of safe AI development when precision is paramount.

Source: Don’t Build An AI Safety Movement by Anton Leicht