Dario Amodei's lengthy manifesto about superintelligent AI doubles as a thinly veiled argument against regulation, revealing that the real concern isn't AI itself but the potential constraints on AI companies' growth and influence.
Dario Amodei, CEO of Anthropic, has published a 22,000-word essay warning about the existential risks of superintelligent AI, a technology that doesn't yet exist. The essay serves as both a cautionary tale and a strategic argument against regulatory intervention, and it says a great deal about what actually motivates AI industry leaders' public statements.

The timing is notable given Amodei's recent prediction that AI would be writing 90% of code within three to six months, a forecast that hasn't materialized. Human developers still have jobs, and the AI bubble is showing signs of deflating.
The Real Threat: Regulation, Not AI
Amodei's essay frames AI as an existential threat while simultaneously arguing against the very regulations that might mitigate those risks. This creates a paradox: if AI truly poses the dangers he describes, why resist oversight?
The answer lies in the financial realities of AI companies. Anthropic isn't expected to become profitable until 2028, while OpenAI projects profitability in 2030 after burning "roughly 14 times as much cash as Anthropic," according to the Wall Street Journal.
What Actually Kills People
Real-world mortality data from 2023 shows AI nowhere on the list of leading causes of death:
- Circulatory diseases (heart disease): 28.5%
- Neoplasms (cancer): 22.0%
- External causes (including suicide): 7.0%
Suicide, which accounts for 2.1% of deaths, is one category AI may actually make worse, given its growing use in mental health applications. Meanwhile, global concerns tracked by polling company Ipsos show AI isn't even on the radar:
- Crime and violence: 32%
- Inflation: 30%
- Poverty and social inequity: 29%
- Unemployment: 28%
- Financial/political corruption: 28%
The Wealth Concentration Problem
Amodei does identify legitimate concerns about wealth concentration. He notes that Elon Musk's $700 billion net worth already exceeds the ~2% of GDP that John D. Rockefeller's fortune represented during the Gilded Age; 2% of today's roughly $29 trillion US GDP works out to about $580 billion. The essay speculates about future fortunes in the trillions, driven by AI company valuations.
However, this wealth concentration isn't an inevitable consequence of AI technology—it's a policy choice. We can decide whether creative work can be captured and resold without compensation, whether governments should subsidize model development, and whether to impose liability on model makers for misuse.
The China Card
Amodei advocates a geopolitical approach focused on denying China access to advanced chips and semiconductor manufacturing equipment. The strategy aims to slow autocracies' progress toward powerful AI while keeping regulation at home to a minimum.
This framing reveals the underlying motivation: protecting market position and growth potential rather than addressing genuine safety concerns. The essay argues for "limited rules while we learn whether or not there is evidence to support stronger ones"—a position that conveniently delays meaningful oversight.
The Predictable AI Paradox
Ironically, while warning about superintelligent AI making unexpected decisions, the industry has been working to make AI more predictable and controllable. When AI agents gained attention last year, the goal was to constrain behavior, make agents subservient rather than independent, and prevent them from deleting files or posting passwords online.
This contradiction exposes the hollowness of the superintelligence argument. No one actually wants unpredictable AI—they want controllable systems that generate profit while avoiding liability.
The Real Danger
The most pressing threats from AI aren't about superintelligence but about:
- Billionaires drowning democracy in AI-generated misinformation
- Market concentration and monopolistic control
- Job displacement without adequate social safety nets
- Environmental costs from datacenter construction
- Privacy erosion through mass data collection
Amodei's essay ultimately reads as a sophisticated argument against regulation dressed up as a warning about existential risk. The real message is clear: AI companies want to grow unchecked, avoid liability, and maintain their market dominance.
If the AI bubble is indeed deflating, we have an opportunity to focus on these tangible problems rather than hypothetical superintelligence scenarios. The choice isn't between unregulated growth and AI apocalypse; it's between democratic control of powerful technology and letting billionaires decide our future.
