
The tech world watched with bated breath last month as OpenAI CEO Sam Altman unveiled GPT-5, an AI model hyped as humanity's next leap toward artificial general intelligence (AGI): a system smarter than all human minds combined. Expectations soared to near-mythic proportions, drawing comparisons to the tension before the first atomic bomb test. Yet the launch fizzled. Instead of AGI, OpenAI delivered a 'model router' that directs simpler queries to cheaper models, a cost-saving tweak that industry critic Ed Zitron described as adding little user value while lowering OpenAI's cloud expenses. As one observer quipped: 'We wanted godlike AI and got a traffic cop for server loads.'

This anticlimax wasn't just a product flop; it signaled a deeper crisis in AI's core methodology. For years, breakthroughs like GPT-3 and GPT-4 relied on 'pre-training scaling': making models ever larger, trained on ever more data, to unlock new capabilities. But GPT-5's stagnation suggests this approach is hitting a wall, echoing warnings from pioneers like Ilya Sutskever, OpenAI's former chief scientist. Sutskever, who co-created AlexNet, the 2012 network that kicked off the deep-learning era, stunned the industry by leaving in 2024 amid the fallout from Altman's leadership turmoil. His subsequent public talks hinted at scaling's limits, a view now lent weight by GPT-5's underwhelming performance.

Dr. Gary Marcus, a longtime AGI skeptic, sees this as inevitable. 'LLMs alone are not the royal road to AGI,' he told The American Prospect, citing their probabilistic nature as a fatal flaw. Unlike deterministic software, LLMs generate outputs from statistical patterns rather than logic, leading to hallucinations and erratic reasoning, such as struggling to decide whether 9.9 or 9.11 is larger. Scaling exacerbates this without fixing it: OpenAI's own testing has found some newer models hallucinating at higher rates than their predecessors. Marcus advocates 'neurosymbolic' hybrids that blend neural networks with structured reasoning, but such approaches are years away from matching past scaling gains.
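The 9.9-versus-9.11 stumble is easy to reproduce in spirit. Deterministic code compares the numbers correctly every time, while one pattern LLMs plausibly latch onto instead, software-version ordering, gets the answer backwards. A minimal sketch (the `as_version` helper is illustrative, not something from the article):

```python
# Deterministic numeric comparison: as decimals, 9.9 is larger than 9.11.
assert 9.9 > 9.11

def as_version(s: str) -> tuple[int, ...]:
    """Parse a dotted string into an integer tuple for version-style ordering."""
    return tuple(int(part) for part in s.split("."))

# Under version-style (lexicographic tuple) ordering, the result flips:
# "9.11" means minor version 11, which comes after minor version 9.
assert as_version("9.11") > as_version("9.9")  # (9, 11) > (9, 9)
```

Both orderings are internally consistent; the failure mode is that a statistical model has no mechanism forcing it to pick the one the question requires.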


OpenAI CEO Sam Altman, whose promises of AGI are under scrutiny amid GPT-5's shortcomings. (Mattie Neretin/Sipa USA via AP Images)

The technical plateau has dire economic implications. Venture capitalists poured $110 billion into AI startups in 2024, 42% of all Silicon Valley funding, creating 498 'AI unicorns' valued at a combined $2.7 trillion. Yet without AGI-scale growth, these valuations crumble. OpenAI exemplifies the risk: despite targeting $20 billion in revenue this year, it projects $5 billion in losses, and breaking even would require an improbable $100 billion in annual revenue by 2029. Zitron dubs this 'the rot economy': businesses that 'burn billions to lose billions.'

'If rapid progress from scaling has ended, the world-historical bets investors have made on AI will begin to sour,' notes the Prospect report. 'The U.S. economy is dangerously dependent on Big Tech, with the "Magnificent Seven"—Apple, Microsoft, Nvidia, Tesla, Meta, Alphabet, and Amazon—now comprising 34% of the S&P 500.'

Infrastructure investments face similar peril. Projects like OpenAI's $500 billion 'Stargate' data-center venture assume limitless demand for AGI-level compute. But as Morgan Stanley reports, AI infrastructure spending could hit $3 trillion by 2028, a figure unsustainable without AGI-level returns. History offers grim parallels: during the dot-com bubble, Cisco's stock soared 17,000% as a 'picks and shovels' supplier, only to crash 80% when the market corrected. Nvidia, today's equivalent with a 92% share of the GPU market, has seen its stock rise 3,653% since 2020; a similar fall could trigger sector-wide contagion.

Tesla, another AGI darling, looks equally vulnerable. Its price-to-earnings ratio of 188 (versus 27 for rival BYD) hinges on CEO Elon Musk's promises of robotaxis and humanoid robots, all of which depend on continued scaling. Yet with six straight quarters of declining sales and no AGI in sight, Tesla's core business is 'rotting,' according to financial analyst Gordon Johnson. Broader economic vulnerabilities loom, too: in early 2025, Big Tech's AI spending contributed more to U.S. GDP growth than consumer spending did, leaving markets exposed if enthusiasm wanes.

The AI industry now stands at a precipice. Scaling's end demands a pivot to hybrid approaches like neurosymbolic AI, but this resets the innovation clock. Meanwhile, the 'rot economy' of hype—from NFTs to Theranos—has ensnared pensions, sovereign funds, and retail investors alike. As the Prospect warns, a collapse could dwarf the Great Recession, with 'trillions in dead capital' and no bailouts in sight. For developers and tech leaders, the lesson is clear: Build for incremental value, not AGI fairy tales. The future of AI may depend on it.

Source: Adapted from 'What If There’s No AGI?' by Bryan McMahon, The American Prospect.