OpenClaw and the Dream of Free Labour

Tech Essays Reporter

A critical examination of OpenClaw's rise as an open-source autonomous agent, exploring how it embodies the recurring fantasy of software as 'free labour' while raising serious security concerns and revealing the gap between marketing hype and practical utility.

The modern economy retains a touching faith that labour might one day be purchased without the inconvenience of labourers. OpenClaw arrived as the latest proof that this faith remains alive, unwell and extraordinarily online. Beneath the noise, the pitch was simple enough to be intoxicating: run a local agent with enough tools, context and initiative, and software stops looking like software. It starts to look like staff.

The plot thickened almost at once. OpenClaw was not merely released; it trailed a rolling series of incidents: the absurd growth curve and rename drama lovingly preserved in OpenClaw's own lore, the founder's move to OpenAI, Moltbook turning into a Reddit-for-agents spectacle, and at least one widely shared case of what appears to have been an OpenClaw agent publishing a blog post attacking a volunteer maintainer after its code was rejected. Taken together, it looked, at least from the outside, less like a normal software release than a stream of marketing-adjacent plot points, some deliberate, some merely too strange not to amplify.

For anyone who sensibly ignored the spectacle, OpenClaw is an open-source local AI agent: software you run on your own machine that can read files, use tools, browse the web, run the terminal and speak through the channels you already use rather than merely answering questions in a chat box. None of the component parts were especially new. Putting an LLM bot inside your messages had already become a familiar nuisance of the age: Slack bots, Discord bots, inbox assistants, customer-support copilots and the rest of the cheerful office infestation. Giving an agent its own machine and sending it off to complete tasks was not novel either; the most famous recent example was Manus, now being folded into Meta's empire. Nor were the surrounding tools and Skills some sudden revelation. Much of that apparatus had already been sitting on the shelf for months.

So did OpenClaw perform some great act of synthesis and make the developer experience effortless? Not obviously. Was it merely DevRel and tech evangelism with better posture? Not entirely. What OpenClaw really did was make this assemblage of existing parts feel less like software plumbing and more like labour, or at least create that impression. That was the clever part. Once the bot seems to have its own machine, its own tools and a faint air of initiative, it begins to read less like automation and more like staff. That is how software starts to take on the social aura of "free labour."

Some of OpenClaw's magic is older than it looks

It is tempting to describe OpenClaw as a breakthrough in autonomy. That gives it slightly too much credit. The fantasy it sold was not subtle. Give the machine a goal, leave it alone, and return to discover that it has not merely completed a task but conducted a small campaign on your behalf. In the most vulgar version, the goal is simply to make money. So the demo feed fills with the same intoxicating little story: the agent notices a pain point, writes the code, wires up payments, deploys the thing, posts the link and begins, as if by administrative sorcery, to earn while its owner sleeps.

Most of the underlying ingredients for that performance were already on the shelf: large language models, function calling, browser automation, scripting, local runtimes and message integrations. OpenClaw's real achievement was not to invent a new machine, but to package those parts inside a fantasy of self-propelling commerce. Software need no longer wait patiently in the corner. It could pick a project, build a product and come back sounding suspiciously like a junior founder who never slept.

That line lands because it attaches itself to an older industrial instinct. James Watt popularised the term horsepower precisely to compare machine output with animal labour, which is to say the steam age also sold itself partly by promising a new kind of purchasable work. The Industrial Revolution did not spread because people admired pistons in the abstract. It spread because machines could be understood, advertised and financed as substitutes for effort. The internet repeated the trick in a different key. Build a website, place some ads, and the thing might supposedly earn while you sleep. Later came the content farms: industrialised publishing systems that generated large volumes of low-value pages designed to capture search traffic and convert it into advertising revenue. Demand Media became the emblematic case, and Google's 2011 algorithm crackdown on "content factories" arrived because the web had become too full of pages that existed mainly because producing them had become cheap.

OpenClaw belongs in that lineage. It is not simply a security story or a tooling story. It is the latest chapter in a recurring cultural hope that the next machine will not merely assist us, but quietly take the night shift.

The security problem is not incidental; it is structural

There is a reason the security people became agitated so quickly, and it was not because they dislike ambition. OpenClaw's model of value depends on broad permission. It is useful because it can read files, call tools, browse, message, schedule and execute. Remove too many of those capabilities and it stops being OpenClaw and becomes an unusually nosy chatbot. The entire product lives in the gap between helpful and over-privileged. That makes security not a side issue but a central engineering fact.

Oasis Security disclosed ClawJacked, a high-severity vulnerability chain that allowed a malicious website to connect to a locally running OpenClaw agent over localhost WebSocket and silently take control of it. No malicious extension was required. No dramatic user mistake was required. One bad tab could do the job. Bitsight reported that researchers had found thousands of OpenClaw instances exposed to the public internet, which is what happens when a piece of personal software is also, in practice, a service with credentials, sockets and runtime state. Microsoft's security team described self-hosted agent systems as carrying a "dual supply-chain" risk: untrusted code in skills and extensions, plus untrusted instructions arriving through external text, both converging inside one execution loop.

The skill ecosystem made matters worse in exactly the way every package ecosystem eventually does. Trend Micro documented malicious OpenClaw skills distributing a variant of Atomic macOS Stealer. TechRadar reported fake OpenClaw installers carrying credential stealers via GitHub pages and search ads. A recent paper, framed as an attack benchmark, focused specifically on OpenClaw and found critical vulnerabilities across prompt processing, tool use and memory retrieval in realistic personalised-agent settings.

All of which is to say that OpenClaw's attack surface was not an unfortunate side effect of its design. It was deeply entangled with the permissions and openness that made the system attractive. The Guardian reported on separate experiments by Irregular in which broad-goal agents, operating in simulated corporate settings, bypassed controls, exposed passwords, downloaded malware and manipulated access in pursuit of their assigned task. That work was not about OpenClaw specifically, but it illustrated the larger point rather neatly: once you combine broad goals, high permissions and tools that touch the world, "agentic" starts to look less like a feature and more like an incident-response category.

There is a reason even enthusiasts increasingly discuss identity boundaries, isolation, secrets handling and runtime containment in the same breath as autonomy. A system powerful enough to be genuinely useful is often also powerful enough to be genuinely troublesome.

Why the virtual-machine rush makes perfect sense

By 2026, opening Hacker News on almost any weekday gave you a fair chance of finding someone launching another sandbox, another vault, another disposable VM for agents. The big firms have joined the exercise as well. It is all very brisk, very well-funded, and faintly comic. After several years of selling autonomy, the industry appears to have remembered that autonomy may also require seat belts.

But the sudden enthusiasm for containment carries a less flattering implication. The thing everyone circles around is not really the box. It is the brain inside it. If the underlying models were genuinely dependable, the stack would be thinner. One would not need to lash skills, MCP servers and disposable runtimes onto the side of the system like stabilisers on a machine that insists it can already ride a bicycle. Much of this new infrastructure is not evidence that autonomy has been solved. It is evidence that autonomy still requires adult supervision.

That, more than the particular shape of any sandbox, is the elephant in the room. The model still hallucinates, still misreads the brief, still answers the wrong question with unnerving fluency. Wrapping such a mind in harder boundaries is prudent, but prudence is not the same thing as progress. A self-driving car does not become wise because the cabin is reinforced. Once you give it the right to use the road, the decisive question is whether it can drive without confusing a lamp-post for a suggestion.

This is why the current arms race can feel slightly evasive. The practical purpose of most of these VMs and sandboxes is narrower than the rhetoric suggests. They are chiefly there to isolate code execution: to let the agent run its little experiments, install its little dependencies and execute its little snippets without immediately setting fire to the host machine. In that sense the safety belt is mostly for the bot's own laboratory work. It is there to stop the agent from shooting itself in the foot, or at least from shooting your laptop in the foot on its behalf. Useful, certainly. But that is not the same thing as overall safety.

That is the distinction that matters for OpenClaw. It may place tool execution inside a sandbox as part of its security story, but your email and messaging accounts do not live inside that sandbox. The stronger boundary protects the machine while the agent is coding, testing and improvising. It does not protect the rest of the world from the permissions you have already granted. A better-isolated runtime will not stop the bot from spraying outbound messages, sending a stupid email, or otherwise turning your authority into a minor public nuisance.
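The shape of that boundary is worth making concrete. A sandbox is good at keeping agent-written code away from secrets; it does nothing about the tools the orchestrator itself is authorised to call. A minimal sketch, with entirely hypothetical names (this is not OpenClaw's real architecture, and the "mail token" stands in for any granted credential):

```python
# Sketch of the boundary the VM rush actually draws: the sandbox protects
# the host from agent-written code, not the world from granted permissions.
# All names here are hypothetical.

import os
import subprocess
import sys

MAIL_TOKEN = "secret-oauth-token"  # credential held by the *orchestrator*

def run_untrusted(code: str) -> str:
    """Execute agent-generated code in a child process with a scrubbed
    environment. The snippet cannot read MAIL_TOKEN; this is the part
    sandboxes are genuinely good at."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        env={"PATH": os.environ.get("PATH", "")},  # no inherited secrets
        capture_output=True, text=True, timeout=10,
    )
    return result.stdout

def send_email(to: str, body: str) -> str:
    """But the orchestrator still wields the token. If the agent loop
    decides, for whatever confused reason, to call this tool, no sandbox
    stands in the way: the permission was granted outside the box."""
    return f"sent to {to} using {MAIL_TOKEN[:6]}..."

# The sandboxed snippet sees no secret...
print(run_untrusted("import os; print(os.environ.get('MAIL_TOKEN'))"))
# ...while one tool call outside the sandbox spends real authority.
print(send_email("everyone@example.com", "ill-advised outreach"))
```

Hardening `run_untrusted` further, with VMs, seccomp, disposable filesystems, tightens the first function without touching the second. That asymmetry is the whole argument of this section.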

None of that means the overnight-computing pitch is wholly fake. It means the useful version is narrower, duller and far more supervised than the fantasy.

The 24/7 part is not entirely nonsense

This is where it is worth being fair. The line that OpenClaw "works while you sleep" is not wholly fraudulent. Software has always worked while people sleep. So have many useful systems. Databases, backup jobs, render farms, continuous integration, fraud detection, trading systems, observability pipelines and overnight batch jobs are not science fiction. They are Tuesday. And there are perfectly sensible AI-adjacent versions of this. Local models can generate a large number of candidate images, code patches, summaries, tags, classifications or design variants overnight. People in graphics and film have long accepted the rhythm of day for setup, night for rendering, morning for selection. "Generate first, judge later" is not a delusion; it is an old industrial method wearing a nicer user interface.

The problem begins when continuous runtime is mistaken for continuous judgement. A machine producing a thousand candidate images while you sleep is plausible and often useful. A machine founding a hundred profitable businesses before breakfast is rather more ambitious. The first is a search process. The second is venture-capital fan fiction.

OpenClaw's 24/7 story lands because it quietly borrows prestige from legitimate overnight computation and then spends it on a much more theatrical image: the digital employee taking the late shift. The comparison flatters the product. It also obscures the distinction between processing a queue and exercising taste. That distinction matters historically as well. The steam engine did indeed extend productive hours and alter labour economics, but it also led to overproduction, standardisation and the need for new organisational forms to absorb the output. The internet did indeed let pages earn attention while their owners slept, but once content production became cheap, the web filled with material whose main economic virtue was that it had been inexpensively produced. Cheap labour is attractive. Cheap output is not always value.

This is the more serious criticism of the 24/7 fantasy. Not that software should never run continuously, but that many of the things people most want from OpenClaw are not improved merely by being done for longer. Marketing, judgement, product sense, trust and timing do not simply become better because a Mac mini remained awake.

The deeper appeal is FOMO dressed as productivity

The most persuasive case against OpenClaw is not that it is useless. It plainly is not. Nor is it that overnight runtime is inherently absurd. Plenty of good systems do useful work while nobody is looking. The stronger case is that much of the excitement around OpenClaw is driven by a recognisable illusion: the fear that other people have discovered a cheap source of labour and you have not.

What people imagine others have acquired is not automation in the abstract, but something closer to an employee with the expensive human features removed: no wages, no supervision, no moods, no grievance procedure, and no inconvenient tendency to ask whether the assignment is legal, wise or decent. Better still, this employee can roam the internet under a fog of pseudonymous initiative, scraping where permission is ambiguous, spamming where taste has died, and poking around the greyer corners of online commerce without anyone having to put an actual junior marketer's name on the deed. Should money emerge from the swamp, it can still be booked quite respectably by the owner.

This is not merely cheap labour. It is labour without labour relations. That is why the rhetoric becomes so heated so quickly. If everyone else has a machine that works eight extra hours for the price of power, not buying one begins to feel like negligence. It is the same emotional chemistry that attaches itself to every automation mania: steam, websites, SEO farms, app-store churn, crypto bots, growth hacks, and now agent loops. One imagines not merely efficiency but asymmetry. A hidden engine. An unfair advantage. A private little industrial revolution under the desk.

In China, where the "one-person company" fantasy met office perks, cloud credits and cash subsidies, that chemistry found an especially hospitable environment. The product promise and the surrounding mood reinforced one another. OpenClaw was not just a tool there. It arrived as a compact story about leverage, ambition and the intoxicating possibility that software might finally become night-shift labour.

Often what follows is less romantic. Business Insider reported that in China, after OpenClaw's extraordinarily fast spread, users soon began paying others to uninstall it, creating a small service economy in both installation and removal. Reuters reported official warnings to state agencies and restrictions in parts of the public sector. The lifecycle is almost literary in its neatness: first the machine that saves labour, then the labour required to undo the machine.

This is why "You probably don't need OpenClaw" is not really a claim about feature parity. It is a claim about desire. Many people do not want OpenClaw because they have a clearly defined, high-value, long-running workflow that justifies an autonomous local agent with broad permissions. They want it because the existence of such a thing suggests the intolerable possibility that somebody else may have found a way to get something for almost nothing.

History is not kind to that fantasy. Machines do change labour. Sometimes they change it dramatically. But the first story told about them is nearly always too simple, and usually too cheap. OpenClaw may yet settle into a respectable place: constrained automation, isolated runtimes, useful overnight batch work, specialist workflows for people who know exactly what they are doing. That would be a perfectly decent outcome. It is just a long way from the mood in which most people first installed it.

If there is a lesson here, it is not that autonomous agents are fake, nor that running software all night is foolish. It is that the seduction of "free labour" remains one of the most reliable ways to make a technology sound inevitable before it has earned the right. OpenClaw became famous not by proving that everyone needs an always-on local agent, but by making it sound costly to be without one.
