Steven Deobald argues that the tech industry's obsession with AI-powered 'intelligence' mirrors the flawed design of airport motion-sensor sinks: systems that guess our intentions rather than letting us exercise clear, explicit control. He makes the case for 'Artificial StupidIntelligence'—software that embraces simplicity, reliability, and user agency over predictive complexity.
The term "Boring Technology" has long circulated in software development circles as a badge of honor—tools that are proven, stable, and unflashy. But Steven Deobald suggests this philosophy should extend beyond developer tools into every corner of our digital lives. The problem isn't just that we're building complex systems; it's that we're building complex systems that actively work against human agency.
Consider the airport sink. It represents a design philosophy that has infected modern software: the sink attempts to predict your needs rather than responding to explicit commands. You want to wash your hands. The sink wants to detect hand-washing motion. The result is a frustrating dance of waving hands, accidental shut-offs, and children unable to reach the sensor's detection zone. The sink has become an "intelligent" system that fails at its core function because it prioritizes automation over reliability.
This same pattern appears throughout modern software. Microsoft's Copilot transforms Office—software that reached functional maturity decades ago—into an AI-powered guessing machine. Liquid Glass drags perfectly capable iPhone hardware into performance collapse through visual flashiness. Zed, a promising text editor, markets itself primarily through AI features rather than core editing excellence. Each represents a system that has abandoned "done" in pursuit of "smarter."
The concept of "enshittification"—popularized by Cory Doctorow—describes how platforms degrade by shifting value from users to partners to shareholders. But Deobald identifies a parallel phenomenon: "intelligence creep," where systems gain complexity without gaining utility. Excel 2024 offers little meaningful improvement over Excel 2007 for most users, yet both Microsoft and users must pretend the ribbon interface and AI features represent progress. The software became "smarter" without becoming better.
This leads to a counter-proposal: Artificial StupidIntelligence. Not actual stupidity, but a deliberate embrace of "stupid" design patterns—systems that follow rules without trying to guess intent. A "stupid" program does exactly what it's told, every time, without interpretation. It doesn't need 256GB of RAM to render a web page. It doesn't require cloud services to save a file. It doesn't use machine learning to predict which button you might want to click.
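The contrast can be sketched in a few lines. This is a hypothetical illustration, not code from the article: a "stupid" program maps explicit commands to explicit actions, and anything it wasn't told is an error, not a prompt for guessing.

```python
# A minimal sketch of a "stupid" program: explicit commands map to
# explicit actions. No inference, no prediction -- unknown input is
# an error, never an invitation to guess intent.
# (Command names here are illustrative assumptions.)

COMMANDS = {
    "on": lambda: True,   # faucet on
    "off": lambda: False,  # faucet off
}

def run(command: str) -> bool:
    """Execute exactly the command given; refuse anything else."""
    if command not in COMMANDS:
        raise ValueError(f"unknown command: {command!r}")
    return COMMANDS[command]()

# A tap with two positions, not a motion sensor:
assert run("on") is True
assert run("off") is False
```

The design choice is the whole point: the failure mode of this program is a visible error message, while the failure mode of a predictive system is silently doing the wrong thing.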
The utility of this approach becomes clear when we examine what "done" software looks like. A paint program from 1987 still functions perfectly because it has one job: drawing pixels. It doesn't need subscriptions, telemetry, or AI-powered brush suggestions. A word processor from that era still writes documents without Clippy interrupting or cloud sync failing. These systems achieved their purpose and stopped, rather than continuously metastasizing into platforms.
But we needn't become retro computing enthusiasts. The principles of stupid programs apply to modern distributed systems. Use dumb pipes and open protocols instead of proprietary messaging layers. Save data to files that users control, not just databases users can't access. Build single-file binaries that run anywhere. Provide open APIs over open schemas. Make your UI simple enough to work on old hardware, but pleasant enough to be a joy to use.
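"Files that users control" is concrete enough to demonstrate. The sketch below is an illustrative assumption (the file name and settings schema are invented, not from the article): application state lives in plain JSON at a path the user can see, read, and edit with any text editor, with no opaque database and no cloud round-trip.

```python
# A sketch of "data in files users control": settings stored as
# plain JSON in a user-visible file. Any text editor can read or
# repair it; no proprietary store required.
# (File name and schema are illustrative assumptions.)
import json
from pathlib import Path

def save_settings(settings: dict, path: Path) -> None:
    # Stable, human-diffable output: indented and key-sorted.
    path.write_text(json.dumps(settings, indent=2, sort_keys=True))

def load_settings(path: Path) -> dict:
    return json.loads(path.read_text())

path = Path("settings.json")
save_settings({"theme": "plain", "autosave": True}, path)
assert load_settings(path) == {"autosave": True, "theme": "plain"}
```

Because the format is an open schema rather than a private database, the user's data outlives the application, which is exactly the property "done" software from 1987 still enjoys.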
This philosophy extends to business models. Instead of charging for quarterly features that nobody requested, charge for utility. Instead of building "integrations" that lock users into your ecosystem, provide open APIs that let users build their own connections. Instead of demanding users trust your cloud service, let them run their own backend and discover they probably don't want to.
The airport sink's fundamental flaw is that it removes agency. Users can't express "turn on" or "turn off"—they can only hope the sensor interprets their movements correctly. Modern AI-powered software risks the same failure. When Excel guesses which function you want, when Copilot suggests what to write, when Liquid Glass predicts which animations will delight you, they're all making the same bet: that prediction beats specification.
But the history of computing's greatest successes tells a different story. The command line, the file system, the spreadsheet, the text editor—these triumphed not because they were smart enough to guess our intentions, but because they were stupid enough to reliably execute our explicit commands. They gave us agency.
The choice between Artificial Intelligence and Artificial StupidIntelligence isn't really about intelligence. It's about whether we want systems that serve us or systems that guess about us. Airport sinks serve no one well, no matter how many sensors they contain. Sometimes the most sophisticated design is the one that gets out of the way and lets you wash your hands.