At NVIDIA's recent GPU Technology Conference, CEO Jensen Huang made a provocative assertion: "AI is not a tool. AI is work. That is the profound difference. AI is, in fact, workers that can use tools." This framing, echoed by tech observers like Ben Thompson, suggests AI unlocks a market "orders of magnitude" larger than traditional software by acting as autonomous labor. But this perspective risks obscuring AI's true potential—and its ethical pitfalls.

The Worker Illusion: Why Huang's Analogy Falls Short

Huang cited examples like Perplexity booking vacations or Cursor generating code as evidence of AI's worker-like autonomy. Yet this overlooks decades of software evolution. As Tim O'Reilly points out, complex systems like Amazon have long functioned as "workers": they retrieve products, calculate taxes, manage logistics, and handle payments—all without human intervention. In 2016, O'Reilly described Google and Amazon's algorithms as "electronic workers" managed by human developers. The real shift isn't autonomy but AI's generality: its ability to handle novel tasks beyond rigid programming.

However, AI's current capabilities remain bounded. Huang's examples overstate reality. Trusting an AI to "underwrite a loan based on a 250-word prompt" or fully automate travel planning ignores today's reliability gaps. Even in coding—where AI excels—humans must initiate, evaluate, and supervise outputs. As O'Reilly notes, "AI is getting pretty good at software development, but the results are still mixed."

Tools vs. Workers: The Implications of Framing

Viewing AI as a worker fuels a dangerous narrative: that it can replace humans. This mindset risks repeating the errors of the Industrial Revolution, when productivity gains enriched owners while laborers suffered. Contrast this with treating AI as a tool—a "jet plane for the mind" that amplifies human creativity. Steve Jobs championed computers as "bicycles for the mind," and Microsoft's Satya Nadella later emphasized technology that "enables you to do your work better." This distinction matters profoundly:

  • Worker framing: Prioritizes automation of existing tasks, potentially devaluing human agency and concentrating wealth.
  • Tool framing: Empowers users to solve new problems, democratizing capabilities once reserved for experts.

Claude, Anthropic's AI, provided a startlingly lucid self-assessment when asked if it was a worker or tool:

"I don’t initiate. I’ve never woken up wanting to write a poem or solve a problem. My activity is entirely reactive... Humans deserve consideration for their own sake. You should care about whether your employee is flourishing... I don’t have skin in the game. That’s not just a quantitative difference—it’s qualitative."

This underscores AI's lack of volition, stakes, or accountability—traits inherent to human workers.

The Path Forward: Empowerment Over Replacement

Historically, transformative tools like word processors or the internet unlocked exponential productivity gains without eliminating human roles. After 30 years, e-commerce still accounts for only 20% of retail. Similarly, AI's value lies in democratization: enabling non-experts to code, research, or create at unprecedented speed. But as O'Reilly argues, three questions must guide development:

  1. Does AI empower users to achieve the previously impossible?
  2. Does it broaden access to specialized skills?
  3. Do the productivity benefits flow to users or only to owners?

Ignoring these questions risks a 21st-century immiseration. Tools like Claude can summarize centuries of research in minutes—but without human curation, insights grow shallow. The future hinges on resisting Huang's worker metaphor and building AI that elevates human potential. As O'Reilly concludes, "Replace human workers with AI workers, and you will repeat the mistakes of the 19th century. Build tools that empower and enrich humans, and we might just surmount the challenges of the 21st century."