A software engineer reflects on how his skepticism toward AI tools transformed into daily reliance, exploring the tribalism that has polarized the debate and the practical realities that defy simplistic narratives.

The battle lines are drawn, and the discourse has hardened into something resembling political tribalism. On one side, the evangelists proclaim AI as the inevitable future of software development. On the other, the skeptics warn of hallucinations, security risks, and the devaluation of human expertise. Caught in the middle are developers like me, who have watched the technology evolve from a curiosity into a daily tool, and who now find themselves navigating a landscape where the old certainties no longer apply.
My own journey mirrors this shift. A year ago, I viewed large language models as amusing toys—hyperactive children barfing gobbledygook into an IDE. The crypto bros who had been hawking monkey JPEGs were suddenly praising AI, and upper management's euphemistic suggestions that we "learn these tools" felt like a threat: you are expendable. The smartest engineers I knew were unimpressed, and I agreed with them. The autocomplete was slow and buggy; I could type faster and make fewer mistakes.
Something changed in 2025. Whether it was the release of Opus 4.5, advances in reinforcement learning, or the clever design of Claude Code, a threshold was crossed. Suddenly, it made more sense to write a markdown specification, work with the AI in plan mode to refine it, and let it handle the busywork. The bugs remained, but so did the solutions. Cursor Bugbot would find vulnerabilities I never would have considered, and Claude would fix them. The question became unavoidable: what is my job as a programmer anymore?
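To make that workflow concrete, here is roughly the shape of such a spec. This is an invented example, not a real project of mine:

```markdown
# Feature: CSV export for the reports page

## Goal
Add an "Export CSV" button that downloads the currently filtered report.

## Constraints
- Reuse the existing `ReportFilter` query object rather than duplicating its logic.
- Stream the response; reports can exceed 100,000 rows.
- Dates in ISO 8601; UTF-8 with BOM so Excel opens it cleanly.

## Out of scope
- Scheduled exports, XLSX output, column customization.

## Acceptance
- Unit tests for the serializer and an integration test for the download route.
```

The spec is short on purpose. Plan mode is where the back-and-forth happens; the document just pins down the decisions I actually care about.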
This is where the tribalism becomes actively counterproductive. The debate has devolved into team sports: hucksters selling ready-built solutions, doomsayers proclaiming the end of software development, and holdouts insisting the entire house of cards is about to collapse. The truth, as I see it, is that nobody knows anything. The most honest position is one of uncertainty.
The practical reality is that AI tools are already reshaping workflows in ways that defy simple categorization. Security? I've had agents find vulnerabilities. Performance? They write benchmarks, run them, and iterate on solutions. Accessibility? They're dumb at that—until you give them a browser to check their work and the magic word "accessibility," at which point they often outperform the median web developer. The pattern is clear: when the tools are given the right constraints and feedback loops, they can exceed human performance in specific domains.
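That phrase "feedback loops" is doing a lot of work, so let me unpack it. Stripped of any particular vendor's API, the pattern is a loop: the model proposes a change, a deterministic harness measures it, and the measurement goes back into the next prompt. A minimal sketch in Python, where `ask_agent` and `bench.py` are placeholders I'm inventing for illustration, not real APIs:

```python
import subprocess
from pathlib import Path

def run_benchmark() -> float:
    """Run the project's benchmark and return a score (lower is better)."""
    out = subprocess.run(
        ["python", "bench.py"], capture_output=True, text=True, check=True
    )
    return float(out.stdout.strip())  # assumes bench.py prints one number

def ask_agent(prompt: str) -> str:
    """Placeholder for a call to whatever coding model you use."""
    raise NotImplementedError

def optimize(source: Path, attempts: int = 5) -> float:
    """Let the model rewrite a file, keeping only changes the benchmark likes."""
    best = run_benchmark()
    for _ in range(attempts):
        code = source.read_text()
        source.write_text(ask_agent(
            f"Current score: {best}. Make this faster without changing "
            f"behavior. Return the full file.\n\n{code}"
        ))
        score = run_benchmark()
        if score < best:
            best = score  # keep the improvement
        else:
            source.write_text(code)  # revert the regression
    return best
```

The interesting part isn't the model; it's that the loop is closed by a measurement the model can't talk its way around.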
Yet the tribal mindset obscures this nuance. The engineers who once dismissed AI are now clinging to their hard-won knowledge, while the evangelists oversell capabilities. Both sides miss the middle ground: AI is neither a panacea nor a catastrophe. It's a tool that, when used thoughtfully, can augment human capability. The breakthrough isn't some future event—it's already here, in the form of systems that can chain together agents, fact-check each other, and produce working code from specifications.
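"Fact-check each other" sounds hand-wavy, but the mechanics are mundane: one model drafts, a second pass critiques the draft against the spec, and the code only ships when the critic signs off or you hit a retry limit. Another sketch, reusing the same `ask_agent` placeholder as above:

```python
def generate_with_review(spec: str, max_rounds: int = 3) -> str:
    """Draft code from a spec, then loop a reviewer pass over it."""
    # ask_agent is the same model-call placeholder as in the previous sketch.
    draft = ask_agent(f"Implement this spec:\n{spec}")
    for _ in range(max_rounds):
        review = ask_agent(
            "Review this code against the spec. Reply APPROVED, or list "
            f"concrete defects.\n\nSpec:\n{spec}\n\nCode:\n{draft}"
        )
        if review.strip().startswith("APPROVED"):
            return draft
        draft = ask_agent(
            f"Revise the code to address these notes:\n{review}\n\nCode:\n{draft}"
        )
    return draft  # best effort after max_rounds; a human still reviews it
```

Nothing here is clever. It's the same review loop we already run with humans, just faster and less tired.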
The inefficiency is undeniable. Running multiple models to fact-check each other is wasteful, and the environmental cost is real. But if it's cheaper than a developer's salary and "good enough," the history of software development suggests it will happen anyway. The last half-century is littered with tools that were dismissed as inefficient or inferior and became standard regardless: compilers over hand-tuned assembly, garbage collection over manual memory management, JavaScript on the server.
What breaks my heart is seeing fellow developers bury their heads in the sand, refusing to engage with what's in front of them. Many are scared, confused, or uncertain, and the tribal battle lines have clouded our judgment. Depending on what we build, we also inhabit different worlds in which the technology is genuinely better or worse; I still don't think LLMs are great at UI work, for example. Arguing across those worlds without acknowledging them is what makes the discourse so unhelpful.
My advice, for what it's worth, is to experiment, tinker, and remain curious. Software development looks nothing like it did three years ago, and we have no idea where it will be three years from now. The ride will be bumpy for everyone. The most useful thing we can do is have empathy for our fellow passengers, regardless of which tribe they belong to.
The future isn't a binary choice between human and machine. It's a collaboration, messy and imperfect, where the boundaries between author and tool blur. The question isn't whether AI will change software development—it already has. The question is how we adapt, how we maintain our humanity in the process, and how we build systems that are better than what either humans or machines could create alone.
This isn't about surrendering to the machines or clinging to the past. It's about recognizing that the most interesting work happens in the space between certainty and skepticism, between fear and enthusiasm. That's where the real innovation lies—not in the tools themselves, but in how we choose to use them.
The tribalism will continue. The thinkpieces will keep coming. But for those of us in the trenches, the work goes on. We experiment, we learn, we adapt. We build systems that are smarter than we are, and we learn from their mistakes. We question our assumptions, and we remain open to the possibility that the future might look different from what we imagine.
In the end, that might be the most valuable skill of all: the ability to hold contradictory ideas in mind, to acknowledge uncertainty, and to keep building anyway. The tools will change. The paradigms will shift. But the fundamental challenge remains the same: to create something useful, something elegant, something that makes the world a little better than we found it. The rest is just noise.
