#AI

Blood In The Machine: Luddites, AI, and the Eternal Struggle Over Labor

Frontend Reporter

Brian Merchant's 'Blood In The Machine' draws striking parallels between the Luddite movement and today's AI revolution, showing how technology's deployment has repeatedly served to consolidate power rather than acting as an inevitable force of progress.

Brian Merchant's Blood In The Machine offers a compelling historical lens through which to view our current technological moment, particularly the rise of artificial intelligence. The book's central thesis—that technology itself is neutral while its deployment determines its impact—resonates powerfully with contemporary debates about AI in the workplace.

The Luddite movement, often mischaracterized as anti-technology, was actually a resistance to the use of machines in ways that harmed workers and communities. As one historian quoted in the book notes, "If workmen disliked certain machines, it was because of the use to which they were being put, not because they were machines or because they were new." This distinction feels particularly relevant today as we grapple with AI's role in creative industries, software development, and knowledge work.

What strikes me most about Merchant's analysis is how it reframes our understanding of technological disruption. The Luddites didn't smash machines out of irrational fear of progress—they destroyed "machinery hurtful to commonality," specifically targeting equipment that "tore at the social fabric, unduly benefitting a single party at the expense of the rest of the community." Sound familiar?

This pattern repeats throughout history. Richard Arkwright, often celebrated as an industrial pioneer, didn't actually invent most of the technology he's credited with. As Merchant points out, his "innovation" was "pieced together from the inventions of others." What Arkwright truly innovated was the system of factory work itself—creating a model where human labor was subordinated to machine rhythms and capital's demands.

Andrew Ure, an early business theorist, captured this perfectly when he noted that Arkwright's "main difficulty" wasn't inventing the machinery, but rather "training human beings to renounce their desultory habits of work and to identify themselves with the unvarying regularity of the complex automaton." This quote could describe modern tech companies' approach to AI implementation almost verbatim.

The book makes a crucial point about labor exploitation that feels ripped from today's headlines. Arkwright's "secret sauce" wasn't technological innovation but rather "labor exploitation"—a model that, as Merchant observes, "hasn't changed much" in the centuries since. The playbook remains remarkably consistent: identify profitable technologies, appropriate others' ideas, and deploy them with "unmatched aggression and shamelessness."

This brings us to our current AI moment. When CEOs enthusiastically promote AI tools to their employees, it isn't about productivity or innovation—it's about leverage. As the book notes, those who deploy automation "can use it to erode the leverage and earning power of others, to capture for themselves the former earnings of a worker." The enthusiasm for AI in corporate America makes perfect sense when viewed through this lens.

What's particularly insidious is how this dynamic has shifted our value system. Merchant observes that "respect for the natural rights of humans has been displaced in favor of the unnatural rights of property." We've moved from a world where human dignity and community well-being were primary concerns to one where property rights and shareholder value dominate.

The parallel to AI is striking. My concern isn't that AI will achieve consciousness and enslave humanity—it's that AI is being deployed in ways that consolidate power and wealth into fewer hands at the expense of the many. Just as the Luddites recognized that machines could be used to undermine community welfare, we need to critically examine how AI is being implemented today.

The lesson from Blood In The Machine isn't to reject technology but to question its deployment. Who benefits? Who loses? What social fabric is being torn? These questions were relevant in 1811, and they're equally relevant in 2024 as we navigate the AI revolution. The machines themselves aren't the problem—it's the systems we build around them and the values those systems prioritize.

Perhaps the most valuable insight is that technological change isn't inevitable or neutral. It's shaped by human choices, power structures, and economic incentives. Understanding this history helps us make better choices about the technological future we're building—before we need our own version of the Luddite movement to course-correct.
