In the old lacquered coffee shop on the corner of Chippewa Square, the author stares incredulously at their phone, watching Hank Green interview Nate Soares, co-author of the new book If Anyone Builds It, Everyone Dies. The scene feels emblematic of a larger phenomenon: the popularization of AI doomerism narratives that may be serving interests other than genuine safety concerns.


The internet's favorite rational science nerd appears to be gushing over Soares, an AI doomerist whose message has become increasingly difficult to distinguish from that of big tech lobbyists. That convergence raises critical questions about the narratives shaping AI policy and public discourse.

The Extinction Narrative and Its Omissions

In his video "We've Lost Control of AI," Hank Green warns his audience of catastrophe, citing the Statement on AI Risk—a tweet-sized document on the Center for AI Safety's website "signed by Nobel Prize winners, scientists, and even AI company CEOs." Yet Green omits a single, crucial word: "extinction."

The original statement reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Green's video and description both conspicuously leave out this most consequential word.

This omission is particularly puzzling given that the Statement on AI Risk is actually signed by tech executives like Sam Altman, Bill Gates, and Dario Amodei (Anthropic's CEO)—not Nobel Prize winners. The statement that does carry the signatures of respected experts is the Global Call for AI Red Lines, which mentions extinction zero times.

Billionaire-Funded Lobbying

The Center for AI Safety (CAIS), which produced the Statement on AI Risk, is a billionaire-funded think tank and lobbying firm whose lobbying efforts align remarkably well with the interests of OpenAI, Google, and Anthropic. CAIS has spent close to $100,000 on lobbying in recent quarters, drawing its funding from organizations with close ties to the AI industry, such as Open Philanthropy (financed by Facebook co-founder Dustin Moskovitz) and Lightspeed Grants (backed by Skype co-founder Jaan Tallinn).

The Statement on AI Risk's brevity is its strongest feature. It is a clever trick that leans on incontestable claims without providing evidence of an imminent threat or explaining how AI would cause an extinction-level event. That vagueness lets tech CEOs repurpose the statement however they need, so long as it ultimately compels Congress to legislate in ways that benefit proprietary AI models.

Control AI: The Financier and Call to Action

At the end of his video, Green directs viewers to Control AI, an organization he presents as the solution to AI risks. What viewers might not know is that Control AI also financed the video itself—making the content essentially an advertisement rather than an organic discussion.

"I've had Control AI reach out to me about collaborating, so I've done some research on them and I really don't like what I saw there," Carl Brown of Internet of Bugs told the author. "Because they feel, to me, as if they are acting as a propaganda arm of the AI industry."

Control AI is lobbying Congress for an AI licensing regime that would effectively end open-source AI models—the same models currently eating into the profits of companies like Anthropic and OpenAI. The CEOs of those companies promise to carve out exceptions, but history suggests such exceptions rarely materialize.

The Savannah Parable

The author draws an illuminating parallel to ghost tours in Savannah, where tourists are entertained with spooky stories while the darker history of slavery is obscured. "Savannah leans on the fantastical to hide a much darker history," they write. "The ghost tours are there to distract us from the echoes of slavery."

Similarly, AI doomerism may be distracting from the material harms of AI technology—environmental costs, deployment against marginalized populations, and the concentration of power in the hands of a few tech giants.

"AI models present plenty of concerns beyond the supposedly existential and science fictional ones," the author notes, citing Geoffrey Hinton's dismissal of these real-world concerns as "not as existentially serious."

The Rationalist Connection

The doomerist narrative promoted by Green connects to the Rationalist movement, particularly through Eliezer Yudkowsky, founder of the Peter Thiel-funded Machine Intelligence Research Institute (MIRI) and co-author of If Anyone Builds It, Everyone Dies. Yudkowsky tells his followers on LessWrong that they should "find a dignified way to die" and predicts the AI singularity will occur in 2025—yet he still finds time to take selfies with OpenAI CEO Sam Altman.

This ideological alignment between supposed AI critics and tech executives raises questions about whether such narratives reflect genuine concern over existential risk or serve other purposes.

The FUD Strategy in Action

The author suggests that what we're witnessing is a classic FUD (Fear, Uncertainty, and Doubt) strategy, "Silicon Valley's most tried-and-true method for killing open-source projects." By amplifying existential risks, big tech can create an environment where only well-resourced companies can afford to develop "safe" AI systems.

"When I go to dinner with people in San Francisco, they talk a lot about how to not die," Green himself observed in a later video. "Like, a lot. That, it's their main obsession, because it's the only bad thing they can still imagine happening to them."

This self-aware comment suggests Green may be beginning to recognize the disconnect between the existential fears being promoted and the more immediate, material concerns facing society.


As AI becomes increasingly baked into our infrastructure, the battle between open-source and proprietary models takes on greater significance. The concentration of AI power in the hands of a few billionaires could lead to a "telecommunications-style monopoly of our most critical communications infrastructure."

The question remains whether influencers like Hank Green are being deliberately misled or are simply failing to critically examine the narratives they promote. Either way, the consequences for the future of AI development and regulation could be profound.