The AI Job Displacement Debate: Why Comparative Advantage Isn't Enough
#AI

Startups Reporter

A detailed response to David Oks' essay on AI job displacement, arguing that even if comparative advantage preserves some human labor, workers could still face wage pressure, pipeline collapse, and surplus capture by capital owners.

David Oks recently published a well-written essay arguing that current panic about AI job displacement is overblown. While I agree with a few of his premises (and it's nice to see we're both fans of Lars Tunbjörk), I disagree with most of them and arrive at very different conclusions.

My main claim is simple: it is possible for Oks to be right about comparative advantage and bottlenecks while still being wrong that "ordinary people don't have to worry." A labor market can remain "employed" and still become structurally worse for workers through wage pressure, pipeline collapse, and surplus capture by capital.

I'm writing this because I keep seeing the same argumentative move in AI-econ discourse: a theoretically correct statement about production gets used to carry an empirical prediction about broad welfare. I care less about the binary question of "will jobs exist?" and more about the questions that determine whether this transition is benign: how many jobs, at what pay, with what bargaining power, and who owns the systems generating the surplus.

1. Comparative Advantage Preserves Human Labor

Oks invokes the Ricardian argument that labor substitution is governed by comparative advantage, not absolute advantage. The question isn't whether AI can do the specific tasks humans do, but whether the aggregate output of humans working with AI is inferior to what AI can produce alone.

I think this framing is directionally correct as a description of many workflows today. We are in a "cyborg era" where humans plus AI often outperform AI alone, especially on problems with unclear objectives or heavy context.

But comparative advantage only tells you that some human labor will remain valuable in some configuration. It says nothing about wages, the number of jobs, or the distribution of gains. You can have comparative advantage and still have massive displacement, wage collapse, and concentration of returns to capital.

A world where humans retain "comparative advantage" in a handful of residual tasks at a fraction of current wages is technically consistent with Oks' framework, but it is obviously worth worrying about and certainly not "fine."
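To make this concrete, here is a toy two-task Ricardian model. All numbers are illustrative, not empirical estimates: the point is only that a human can retain comparative advantage in one task while a cost-minimizing firm's willingness to pay for that labor collapses.

```python
# Toy two-task Ricardian model (illustrative numbers only).
# The AI has absolute advantage in both tasks; the human keeps a
# comparative advantage in "judgment" work -- yet the wage that
# keeps hiring the human competitive can still collapse.

# Output per hour for each producer on each task.
ai = {"routine": 100.0, "judgment": 20.0}
human = {"routine": 2.0, "judgment": 1.0}

# Opportunity cost of one unit of judgment, in routine output forgone.
ai_opp_cost = ai["routine"] / ai["judgment"]           # 5.0 units of routine
human_opp_cost = human["routine"] / human["judgment"]  # 2.0 units of routine

# The human has comparative advantage in judgment (lower opportunity cost)...
assert human_opp_cost < ai_opp_cost

# ...but a cost-minimizing firm caps the human's wage at the cost of
# buying the same judgment output from the AI instead.
ai_cost_per_hour = 1.0  # hypothetical $/hour of AI compute
ai_judgment_cost = ai_cost_per_hour / ai["judgment"]   # $ per unit of judgment
max_human_wage = ai_judgment_cost * human["judgment"]  # $/hour ceiling

print(f"Wage ceiling for the human: ${max_human_wage:.2f}/hour")  # $0.05/hour
```

Under these (made-up) productivities and compute prices, the Ricardian structure survives intact while the market-clearing wage falls to five cents an hour, which is exactly the distinction the aggregate framing hides.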

Another issue: the comparative advantage framing implicitly assumes that most laborers have the kind of tacit, high-context strategic knowledge that complements AI. The continuation of the "cyborg era" presupposes that laborers have something irreplaceable to contribute (judgment, institutional context, creative direction).

I agree with this for some jobs, but it's not enough for me to avoid being worried about job loss. Under capitalism, firms are rational cost-minimizers. They will route production through whatever combination of inputs delivers the most output per dollar.

Oks agrees with David Graeber's "Bullshit Jobs" thesis that organizations are riddled with inefficiency, and that many roles exist not because they're maximally productive but because of social signaling and coordination failures. Oks treats this inefficiency as a buffer that protects workers.

But if a significant share of existing roles involve codifiable, routine cognitive tasks, then they're not protected by comparative advantage at all. They're protected by social capital and organizational friction, the latter of which I believe will erode.

2. Organizational Bottlenecks Slow Displacement

This is the strongest part of the essay and overlaps substantially with my own modeling work. The distance between technical capability and actual labor displacement is large, variable across domains, and governed by several constraints independent of model intelligence.

The point about GPT-3 being out for six years without automating low-level work is good empirical evidence, though I don't agree that GPT-3 or GPT-4 era models could automate customer service (they would need tool usage, better memory, and better voice latency to do that).

Where the analysis falls short is in treating bottlenecks as static features of the landscape rather than obstacles in the path of an accelerating force. Oks acknowledges that they erode over time but doesn't discuss the rate of erosion, or the possibility that AI itself accelerates their removal.

In my own modeling, I estimate organizational friction coefficients for different sectors and job types. The bottleneck argument is strong for 2026-2029, but I think it's considerably weaker for 2030-2034.

Oks brings up the example of electricity taking decades to diffuse but admits that the timeline isn't similar. I agree: it's not similar, and the data increasingly points toward a compressed S-curve where adoption is slow until it isn't.
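A minimal way to picture what "compressed S-curve" means is a logistic adoption model. The growth rates below are toy parameters, not forecasts; the point is just that doubling the rate parameter halves the time spent in the steep part of the curve.

```python
import math

def adoption(t, rate, midpoint):
    """Logistic adoption share at time t (years)."""
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

def years_10_to_90(rate):
    """Time to go from 10% to 90% adoption: 2*ln(9)/rate."""
    return 2.0 * math.log(9.0) / rate

# Electricity-style slow diffusion vs. a compressed curve (toy rates).
slow, fast = 0.2, 0.8
print(years_10_to_90(slow))  # ~22 years
print(years_10_to_90(fast))  # ~5.5 years
```

Both curves look flat for years before the midpoint, which is why "nothing catastrophic has happened yet" is weak evidence about where on the curve we sit.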

3. Intelligence Isn't the Limiting Factor

Oks writes as though we haven't seen meaningful displacement yet. I would say we have, within the limited capabilities of models today. Beyond the entry-level crisis, displacement is already hitting mid-career professionals across creative and knowledge work.

See reports on illustrators and graphic designers, translators, copywriters, and explicitly AI-related corporate layoffs. The models doing this aren't even particularly good yet. These losses are happening with GPT-4-class and early GPT-5-class models: models that still hallucinate, produce mediocre long-form writing, can't design well, and can't reliably handle complex multi-step reasoning.

If this level of capability is already destroying illustration, translation, copywriting, and content creation, what happens when we reach recursive self-improvement?

More investigative work is needed on how displaced designers, translators, copywriters, and others are reskilling and finding new work, but I would estimate it's extraordinarily difficult in this job market.

Notice the distributional pattern: it's not the creative directors, the senior art directors, or the lead translators with niche expertise getting hit. It's everyone below them: the juniors, the mid-career freelancers, the people who do the volume work.

Oks' comparative advantage argument might hold for the person at the top of the hierarchy whose taste and judgment complement AI, but it offers no comfort for the twenty people who work below that person.

Then there's the capabilities overhang. We haven't even seen models trained on Blackwell-generation chips yet, and models are approaching the ability to build their own next upgrades. Massive new data centers are coming online this year.

Oks' point that "GPT-3 has been out for 6 years and nothing catastrophic has happened" extrapolates forward from 2020–2025 capabilities, right before a massive step-change in both compute and algorithmic progress hits simultaneously.

The river has not flooded but the dam has cracked.

4. Elastic Demand Will Absorb Productivity Gains

Oks argues that demand for most of the things humans create is much more elastic than we recognize today. As a society, we consume all sorts of things—not just energy but also written and audiovisual content, legal services, "business services" writ large—in quantities that would astound people living a few decades ago.

I believe this is real for some categories of output but cherry-picked as a general principle. Software is Oks' central example, and it's well-chosen: software is elastic in demand because it's a general-purpose tool.

But does anyone believe demand for legal document review is infinitely elastic? For tax preparation? For freelance video editors? These are bounded markets where productivity gains translate fairly directly to headcount reductions.

Let's consider a concrete case: AI video generation. Models like Veo 3.1 and Seedance 2.0 are producing near-lifelike footage with native audio, lip-synched dialogue, and automated editorial judgment. Users upload reference images, videos, and audio, and the model assembles coherent multi-shot sequences matching the vibe and aesthetic they're after.

The U.S. motion picture and video production industry employs roughly 430,000 people—producers, directors, editors, camera operators, sound technicians, VFX artists—plus hundreds of thousands more in adjacent commercial production.

The pipeline between "someone has an idea for a video" and "a viewer watches it" employs an enormous intermediary labor force.

Oks' elastic demand argument would predict that cheaper video production simply means more video, with roughly equivalent total employment. And it's true that demand for video content is enormous—McKinsey notes the average American now spends nearly seven hours a day watching video across platforms.

But I would challenge his thesis: is the number of people currently employed between producer and consumer equivalent to the number who will be needed when AI collapses that entire intermediary layer?

When a single person with a creative vision can prompt Seedance/Veo/Sora into producing a polished commercial that once required a director, cinematographer, editor, colorist, and sound designer, does elastic demand for the output translate into elastic demand for the labor?

People can now produce polished AI anime for about $5-$100. The content exists, but the workforce does not.

So, yes, there will be vastly more video content in the world. But the production function has changed; the ratio of human labor to output has shifted by orders of magnitude. The demand elasticity is in the content, not in the labor.
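The arithmetic behind "elasticity is in the content, not the labor" is simple: labor demand is content demand times labor per unit of content. The figures below are hypothetical, chosen only to show that a large demand expansion can coexist with a collapse in hours worked.

```python
# Back-of-envelope: labor demand = content demand x labor per unit.
# Illustrative numbers only -- the point is the ratio, not the levels.

baseline_hours_per_video = 500.0  # crewed commercial production
ai_hours_per_video = 5.0          # one person prompting and curating

baseline_videos = 1_000
demand_multiplier = 10.0          # "elastic demand": 10x more content
ai_videos = baseline_videos * demand_multiplier

baseline_labor = baseline_videos * baseline_hours_per_video  # 500,000 hours
ai_labor = ai_videos * ai_hours_per_video                    # 50,000 hours

print(ai_labor / baseline_labor)  # 0.1 -> 90% less labor despite 10x output
```

Under these assumptions, a tenfold increase in output still destroys nine-tenths of the labor hours, because the labor-per-unit term fell faster than demand grew.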

To summarize: Jevons paradox in aggregate output is perfectly compatible with catastrophic distributional effects. You can have more total economic activity and still have millions of people whose specific skills and local labor markets are destroyed.

The people being displaced right now are not edge cases: they're illustrators, translators, copywriters, graphic designers, video producers, and 3D artists who were told their skills would always be valuable because they were "creative."

The aggregate framing erases these people, and it will erase more.

5. We'll Always Invent New Jobs From Surplus

This is an argument by induction: previous technological transitions always generated new employment categories, so this one will too. The premise is correct, the pattern is real and well-documented.

I don't dispute it. The problem is the reference class. Every previous transition involved humans moving up the cognitive ladder, from physical labor to increasingly abstract cognitive work.

Oks mentions this—agricultural automation pushing people into manufacturing, then manufacturing automation pushing people into services, then service automation pushing people into knowledge work. The new jobs that emerged were always cognitive jobs.

This time, the cognitive domain itself is being automated.

I don't think this means zero new job categories will emerge. But Oks' assertion that "people will find strange and interesting things to do with their lives" doesn't address three critical questions: the transition path (how do people actually get from displaced jobs to new ones?), the income levels (will new activities pay comparably to what they replace?), and ownership (will the surplus that enables those activities be broadly shared or narrowly held?).

There's also the entry-level → senior pipeline problem I mentioned earlier. The gesture toward "leisure" as an eventual end state is telling. If human labor really does become superfluous, that's not a world where "ordinary people" are okay by default, but rather a world where the entire economic operating system needs to be redesigned.

Oks treats this as a distant concern. I'd argue it's the thing most worth worrying about, because policy needs to be built before we arrive there, not after.

6. What's Missing

The deepest issue with Oks' essay is the framing, rather than his individual claims. His entire analysis is labor-centric: will humans still have jobs?

I think this is worth asking, but it's incomplete. The right question is: who captures the surplus? And is that worth worrying about?

If AI makes production 10x more efficient and all those gains flow to the owners of AI systems and the capital infrastructure underlying them, then "ordinary people" keeping their jobs at stagnant or declining real wages in a world of AI-owner abundance is not "fine."

It's a massive, historically unprecedented increase in inequality. The comparative advantage argument is perfectly compatible with a world where human labor is technically employed but capturing a shrinking share of value.

This is what I've been working on in an upcoming policy document—the question of how ownership structures for AI systems will determine whether productivity gains flow broadly or concentrate narrowly.

Infrastructure equity models, worker ownership structures, structural demand creation—these are the mechanisms that determine whether the AI transition is benign or catastrophic.

Oks' thesis has no apparent answer to the question.

Oks is right that thoughtless panic could produce bad regulatory outcomes. But complacent optimism that discourages the hard work of building new ownership structures, redistribution mechanisms, and transition support is equally dangerous, and arguably more likely given how power is currently distributed.

Benign outcomes from technological transitions have never been the default. They've been the product of deliberate institutional design: labor law, antitrust enforcement, public education, social insurance.

I don't think we should be telling people "don't worry". We should worry about the right things. Think seriously about who will own the systems that are about to become the most productive capital assets in human history, and pay attention to whether the institutional frameworks being built now will ensure you share in the gains.

The difference between a good outcome and a bad one is about political economy and ownership, and history suggests that when we leave that question to the default trajectory, ordinary people are the ones who pay.
