In a small room in San Diego last week, a man in a black leather jacket explained to journalists how to save the world from destruction by AI. The man was Max Tegmark, a notable figure in the AI-safety movement, who believes that "artificial general intelligence," or AGI, could precipitate the end of human life. He was briefing reporters on an AI-safety index, to be released the next day, which found that no major company scored better than a C+ in preparedness for this existential threat.

The threat of technological superintelligence has long been the stuff of science fiction, yet it has become a topic of serious discussion in recent years. Despite the lack of a clear definition—even OpenAI's CEO, Sam Altman, has called AGI a "weakly defined term"—the idea that powerful AI poses an inherent threat to humanity has gained acceptance among respected cultural critics.

This narrative stands in stark contrast to the reality of AI development today. Granted, generative AI is a powerful technology that has already had a massive impact on our work and culture. But superintelligence has become one of several questionable narratives promoted by the AI industry, alongside the ideas that AI learns like a human, that it has "emergent" capabilities, that "reasoning models" are actually reasoning, and that the technology will eventually improve itself.

The NeurIPS conference, officially the Conference on Neural Information Processing Systems, serves as a microcosm of this disconnect. Attendance has exploded from approximately 3,850 conference-goers in 2015 to 24,500 this year, according to organizers. The conference center's three main rooms each have the square footage of multiple blimp hangars, with speakers addressing audiences of thousands.

"I do feel we're on a quest, and a quest should be for the holy grail," Rich Sutton, the legendary computer scientist, proclaimed in a talk about superintelligence, encapsulating the grand aspirations that pervade the industry.

The corporate sponsors had booths to promote their accomplishments and impress attendees with their R&D visions. Companies like Google, Meta, Apple, Amazon, Microsoft, ByteDance, and Tesla were joined by lesser-known names such as Runpod, Poolside, and Ollama. One company, Lambda, was advertising itself as the "Superintelligence Cloud." Conspicuously absent from the exhibitor hall were OpenAI, Anthropic, and xAI, whose cachet is already so great that setting up a booth would be pointless.

The conference is a primary battleground in AI's talent war. Much of the recruiting effort happens outside the conference center itself, at semisecret, invitation-only events in downtown San Diego. These events captured the ever-growing opulence of the industry. In a lounge hosted by the Laude Institute, an AI-development support group, a grad student told me about starting salaries at various AI companies of "a million, a million five," of which a large portion was equity. The lounge, perched at the top of the Hard Rock Hotel, was designed in the style of a VIP area at a music festival.

The place to be, if you could get in, was the party hosted by Cohere, a Canadian company that builds large language models. (Cohere is being sued for copyright and trademark infringement by a group of news publishers, including The Atlantic.) The party was held on the USS Midway, an aircraft carrier used in Operation Desert Storm, which is now docked in the San Diego harbor. The purpose, according to the event's sign-up page, was "to celebrate AI's potential to connect our world."

With the help of a researcher friend, I secured an invite to a mixer hosted by the Mohamed bin Zayed University of Artificial Intelligence, the world's first AI-focused university, named for the current UAE president. Earlier this year, MBZUAI established the Institute for Foundation Models, a research group in Silicon Valley. The event, held at a steak house, had an open buffet with oysters, king prawns, ceviche, and other treats. Upstairs, Meta was hosting its own mixer. According to rumor, some of the researchers downstairs were Meta employees hoping to be poached by the Institute for Foundation Models, which supposedly offered more enticing compensation packages.

The disconnect between AGI talk and actual research is perhaps most evident in the conference's academic program. Of the 5,630 papers presented in the poster sessions at NeurIPS, only two mentioned AGI in their titles. An informal survey of 115 researchers at the conference suggested that more than a quarter didn't even know what AGI stands for.

At the same time, the idea of AGI, and its accompanying prestige, seemed at least partly responsible for the lavish amenities. The buffet certainly wasn't paid for by chatbot profits. OpenAI, for instance, reportedly expects its massive losses to continue until 2030. How much longer can the industry keep the ceviche coming? And what will happen to the economy, which many believe is propped up by the AI industry, when it stops?

In one of the keynote speeches, the sociologist and writer Zeynep Tufekci warned researchers that the idea of superintelligence was preventing them from understanding the technology they were building. The talk, titled "Are We Having the Wrong Nightmares About AI?," mentioned several dangers posed by AI chatbots, including widespread addiction and the undermining of methods for establishing truth.

After Tufekci gave her talk, the first audience member to ask a question appeared annoyed. "Have you been following recent research?" the man asked. "Because that's the exact problems we're trying to fix. So we know of these concerns." Tufekci responded, "I don't really see these discussions. I keep seeing people discuss mass unemployment versus human extinction."

It struck me that both might be correct: that many AI developers are thinking about the technology's most tangible problems while public conversations about AI—including those among the most prominent developers themselves—are dominated by imagined ones.

Even the conference's name contained a contradiction: "NeurIPS" is short for "Neural Information Processing Systems," but artificial neural networks were conceived in the 1940s by a logician-and-neurophysiologist duo who wildly underestimated the complexity of biological neurons and overstated their similarity to a digital computer.

Regardless, a central feature of AI's culture is an obsession with the idea that a computer is a mind. Anthropic and OpenAI have published reports with language about chatbots being, respectively, "unfaithful" and "dishonest." In the AI discourse, science fiction often defeats science.

On the roof of the Hard Rock Hotel, I attended an interview with Yoshua Bengio, one of the three "godfathers" of AI. Bengio, a co-inventor of an algorithm that makes ChatGPT possible, recently started a nonprofit called LawZero to encourage the development of AI that is "safe by design." He took the nonprofit's name from a law featured in several Isaac Asimov stories, which holds that a robot must not allow humans to come to harm.

Bengio was concerned that, in a possible dystopian future, AIs might deceive their creators and that "those who will have very powerful AIs could misuse it for political advantage, in terms of influencing public opinion."

I looked around to see if anyone else was troubled by the disconnect. Bengio did not mention how fake videos are already affecting public discourse. Neither did he meaningfully address the burgeoning chatbot mental-health crisis, or the pillaging of the arts and humanities. The catastrophic harms, in his view, are "three to 10 or 20 years" away. We still have time "to figure it out, technically."

Bengio has written elsewhere about the more immediate dangers of AI. But the technical and speculative focus of his remarks captures the sentiment among technologists who now dominate the public conversation about our future. Ostensibly, they are trying to save us, but who actually benefits from their predictions?

As I spoke with 25-year-olds entertaining seven-figure job offers and watched the industry's millionaire luminaries debate the dangers of superintelligence, the answer seemed clear.

Source: This article is based on reporting from The Atlantic, originally published at https://www.theatlantic.com/technology/2025/12/neurips-ai-bubble-agi/685250/