Thoughtworks' latest Technology Radar highlights the dominance of AI in software development, the resurgence of command-line interfaces, and critical security concerns around "permission-hungry" agents.
The 34th edition of Thoughtworks' Technology Radar has been released, offering a comprehensive survey of the technology landscape with 118 blips covering tools, techniques, platforms, and languages that have caught the company's attention. As expected, AI-oriented topics dominate this edition, but the analysis reveals deeper patterns about how AI is reshaping software development practices and forcing a reconsideration of fundamental principles.
AI as a Catalyst for Revisiting Fundamentals
One of the most interesting observations from this radar edition is that AI isn't just pushing development forward—it's also pulling us back to examine our foundations. While assembling this edition, the Thoughtworks team found themselves returning to established techniques like pair programming, zero trust architecture, mutation testing, and DORA metrics. They also revisited core principles of software craftsmanship: clean code, deliberate design, testability, and accessibility as a first-class concern.
This isn't nostalgia but a necessary counterweight to the speed at which AI tools can generate complexity. As Martin Fowler notes, "This is not nostalgia, but a necessary counterweight to the speed at which AI tools can generate complexity."
The Command Line Makes a Comeback
After years of abstracting away the command line in the name of usability, agentic tools are bringing developers back to the terminal as a primary interface. This represents a significant shift in how we think about developer experience and tool design. The resurgence suggests that as AI agents become more sophisticated, the directness and precision of command-line interfaces offers advantages that graphical interfaces cannot match.
Security Takes Center Stage
A major theme of this radar is securing "permission-hungry" agents. The term "permission hungry" describes a fundamental tension in the current agent moment: the agents worth building are the ones that need access to everything. Tools like OpenClaw and Claude Cowork supervise real work tasks, while Gas Town coordinates agent swarms across entire codebases. These agents require broad access to private data, external communication, and real systems—each arguing that the payoff justifies it.
However, the safeguards haven't caught up with this ambition. The appetite for access collides with unsolved problems. Prompt injection means models still can't reliably distinguish trusted instructions from untrusted input. This creates a dangerous gap between capability and security that the radar team is actively addressing.
The addition of Jim Gumbley to the writing team strengthens the security perspective, bringing years of experience including work on the Threat Modeling Guide. Having a strong security presence on the radar team is especially important given the serious security concerns around using LLMs.
The Harness Engineering Challenge
Many of this radar's blips are about Harness Engineering, with the radar meeting serving as a major source of ideas for Birgitta's excellent article on the subject. The radar includes several blips suggesting the guides and sensors necessary for a well-fitting harness. This metaphor captures the challenge of creating frameworks and guardrails that allow AI agents to be productive while remaining safe and controllable.
Code Quality in the Age of AI
Mike Mason's analysis of what happens when developers aren't reading the code provides a sobering counterpoint to the enthusiasm around AI-generated code. He describes a Python codebase produced by Claude that was largely working—unit tests passed, and a few hours of real-world testing showed it was successfully managing a fairly complex piece of infrastructure. But somewhere around 100KB of total code, he noticed something alarming: the main file had grown to about 50KB (2,000 lines), and Claude Code, when it needed to make edits, had started reaching for sed to find and modify code within that file.
This was a serious alarm bell. As well as the experience of "a friend," he ponders the 500,000 lines of Claude Code after the leak. Both things are true: there is good architecture in Claude Code, and there is also an incomprehensible mess. That's actually the point. You don't get to know which is which without reading the code.
His conclusion is a rough framework: throw-away analysis scripts are fine to vibe away, but tooling you need to maintain and durable code needs regular human review—even if it's just a human asking a model to evaluate the code with some hints as to what good code looks like. The moment you say "I'm getting uncomfortable with how big this is getting, can we do something better?" it does the right thing: sensible decomposition, new classes, sometimes even unit tests for the new thing. It knew, it just didn't volunteer it.
He does recommend being serious with CLAUDE.md, though I don't know if he's tried many of the patterns that Rahul Garg has recently posted to break the similar frustration loop that he saw.
Broader Implications
The radar touches on several other important themes. Dan Davies poses an annoying philosophy thought experiment for us to consider how we feel about LLMs indulging in ghost writing. This raises questions about authorship, creativity, and the nature of intellectual work in an AI-assisted world.
There's also a poignant reminder of what's been lost in the recent wave of government dismantling. DOGE dismantled many useful things during their brief period with the wood chipper, including DirectFile, a government program that supported people filing their taxes online. Don Moynihan has talked to many folks involved in Direct File and has penned a worthwhile essay that isn't just relevant to DirectFile and other U.S. government technology projects, but indeed any technology initiative in a large organization.
Moynihan highlights a paradox of government reform: the simpler a potential change appears, the more likely that it has not been implemented because it features deceptive complexity that others have tried and failed to resolve. I've heard that tale in many a large corporation too.
One way government initiatives are different is that, at its best, it's built on an attitude of public service. Many who worked on Direct File drew a sharp contrast with DOGE and their approach to building tech products. One point of distinction was DOGE's seeming disinterest in public interest goals and of the public itself: "if you do not think government has a responsibility to serve people, I think it draws into question how good are you going to be at making government work better for people if you just don't believe in that underlying principle."
The tragedy for U.S. taxpayers like me is that we've lost an effective way to go through the annual hassle of taxes. In addition, the IRS is much weaker—it's lost 25% of its staff and its budget is 40% below what it was in 2010. Much though we hate tax collectors, this isn't a good thing. An efficient tax system is an important part of national security; many historians consider the ability to raise taxes effectively was an important reason why Britain won its century-long struggle with France in the Eighteenth century. A wonky tax system is also a major reason why the French monarchy, so powerful at the start of that century, fell to revolution. Indeed, there is considerable evidence that increasing the budget of the IRS would more than pay for itself by increasing revenue.
The Technology Radar 34 thus presents a complex picture: AI is transforming development practices, but also forcing us to revisit our foundations; command-line interfaces are making a comeback; security concerns around AI agents are paramount; and the broader societal implications of technology decisions continue to unfold. As we look forward to the next radar in six months, the list of challenges and opportunities is likely to grow, but so too will our understanding of how to navigate this rapidly evolving landscape.

Comments
Please log in or register to join the discussion