The Real Bottleneck: Why Faster Coding Doesn't Speed Up Projects
#Dev

Startups Reporter

Software development isn't slow because coding is slow—it's slow because understanding takes time. Companies optimize the wrong thing by focusing on accelerating code generation while ignoring the real bottleneck: generating understanding.

Companies constantly seek ways to speed up software development. They hire more developers, adopt new frameworks, and now turn to AI coding assistants promising dramatic productivity gains. But what if the fundamental assumption is wrong? What if the bottleneck was never the speed of writing code?


For years, I've watched organizations invest in acceleration in precisely the wrong place. They optimize coding while the real problem remains unsolved: understanding what is supposed to be built in the first place.

The Myth of Slow Coding

There's a tacit assumption in many companies: software development takes so long because programming is complex. The faster code is generated, the faster the product is finished. This sounds plausible but doesn't hold up under scrutiny.

Ask experienced developers what they spend their time on. The answer might surprise you. The actual writing of code often makes up only a fraction of the work. The larger part is spent on:

  • Figuring out exactly what is supposed to be built
  • Understanding how the existing system works
  • Clarifying what the requirement really means
  • Correcting what was not understood correctly the first time

This isn't inefficiency. This is the nature of the work. Developing software means understanding a problem and translating that understanding into a formal language that a computer can execute. The translation step—coding—is the smaller part. The larger part is the understanding itself.

The time between a requirement and a finished feature is not long because typing is slow. It is long because understanding takes time. Because communication takes time. Because translating between the world of business logic and the world of code takes time.

Those who don't see this difference are investing in the wrong area.

Building the Wrong Thing Faster

What happens when the bottleneck is misdiagnosed? You optimize what you believe is the problem and then wonder why the results fail to materialize.

I've seen companies enlarge their teams to move faster. The result was not more output but more coordination overhead. Suddenly, more people had to be brought up to speed. More meetings, more alignment, more misunderstandings.

Frederick P. Brooks' famous law ("Adding manpower to a late software project makes it later") is not a theoretical curiosity. It describes what happens when you believe software is produced primarily by labor.

I've also seen companies switch to new frameworks because the old ones were allegedly too slow. The result was not faster development but a learning curve that took months. The promised productivity increase never materialized because the actual issue—namely the lack of understanding of the business logic—was not solved by the new framework. It was merely shifted to a new technical level.

Currently, I'm seeing companies expecting a dramatic productivity increase from AI-powered coding assistants. These tools are impressive. They can generate code, reduce boilerplate, offer suggestions. They can produce in seconds what would take a human minutes or hours.

But they don't solve the problem that actually slows down most projects.

The Illusion of AI-Generated Productivity

Expectations for AI in software development are currently enormous. There's talk of tenfold productivity increases, of democratizing programming, of everyone being able to develop software soon. Some even predict the end of the classic developer profession.

These narratives have one thing in common: they assume that writing code is the limiting factor. If AI takes over writing, so the logic goes, then the bottlenecks disappear.

History Repeats Itself

Anyone who has worked in this industry longer recognizes the pattern. The same promises were made with low-code and no-code platforms: business departments should build their own software, developers would become redundant. Before that, there were RAD (Rapid Application Development) systems, which were supposed to make programming so easy that anyone could do it. Before that, CASE tools, before that, 4GL languages.

Each generation had its technology that was supposed to herald the end of classic software development. None of these predictions came true.

This doesn't mean these technologies were useless. Many have indeed made work easier, accelerated certain tasks, opened up new possibilities. But they haven't solved the fundamental problem: someone has to understand what is to be built. Someone has to bridge the gap between what the customer needs and what the machine can execute.

This translation effort cannot be automated away. AI won't change that either.

Developers will continue to be needed. But the profile of what is required of them is changing. Simply "I can program" is no longer enough. The ability to write code becomes a commodity once machines can do it too. What remains, and what gains value, is the ability to understand: to get to the bottom of problems, to ask the right questions, to translate between business logic and technology.

The future belongs not to those who produce code the fastest, but to those who best understand which code is needed in the first place. An AI can produce code on command, but the command must come from a human who has understood what is to be built. The AI cannot know what the customer needs. It cannot question the implicit assumptions behind a requirement. It cannot recognize that two stakeholders mean different things when they use the same word. All of this remains human work.

What happens when companies skip this work and instead rely on AI-generated productivity? They get more code in less time. Code based on unverified assumptions. Code that translates misunderstandings into machine language. Code that later needs to be expensively corrected or replaced.

The AI delivered what it was supposed to. Only it wasn't what was needed. If I haven't understood what I'm supposed to build, it doesn't help me much to build it faster. I'll just build the wrong thing more efficiently.

The code is generated faster, but it doesn't solve the right problem. The iteration still comes, only now there's more code that needs to be touched again. More code means more complexity, more places to adjust, more potential for errors.

This is not to say that AI tools are useless. On the contrary: for experienced developers who know exactly what they need, they can be valuable tools. They can take over routine tasks and create space for the work that really matters: understanding, questioning, thinking through. But they don't replace this work.

Anyone who believes they can skip the understanding phase with AI will ultimately lose more time, not less. AI accelerates writing. It does not accelerate understanding. And understanding has always been the real bottleneck.

A tool that helps me run faster in the wrong direction is not a productivity gain. It's an acceleration of failure.

Where Time is Really Lost

To get to the point: where exactly do projects lose their time?

The first factor is misunderstandings between the business side and development. A requirement is formulated, development implements it, and during acceptance it turns out that this wasn't what was meant. Not because someone made a mistake, but because the same words mean different things to different people.

The business expert talks about an "order" and means something specific, with all the implicit rules, exceptions, and boundary conditions they know from their daily work. The developer hears "order" and maps it to their mental model, which is necessarily incomplete. Both believe they are talking about the same thing. Both only realize when code exists that this was not the case.
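The divergence can be made tangible in code. The following sketch is purely illustrative (both models, all fields, and all rules are invented): the same word "order" backs two different data models, and the gap between them is exactly what surfaces only after code exists.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one word, two incomplete mental models.

# What the developer heard: an order is items plus a total.
@dataclass
class OrderAsHeard:
    items: list[str]
    total: float

# What the business expert meant: an order also carries the implicit
# rules they know from daily work -- credit approval, partial shipments,
# cancellation windows -- none of which were stated in the requirement.
@dataclass
class OrderAsMeant:
    items: list[str]
    total: float
    credit_approved: bool = False          # unstated rule
    partial_shipments: list[list[str]] = field(default_factory=list)
    cancellation_deadline_days: int = 14   # unstated boundary condition

# The gap both sides discover too late:
heard = set(OrderAsHeard.__dataclass_fields__)
meant = set(OrderAsMeant.__dataclass_fields__)
print(sorted(meant - heard))
```

Nothing in either model is wrong on its own; the cost comes from the fact that the difference between them was never spoken out loud.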

Ultimately, software does not reflect what the customer wants, and certainly not what they need: software reflects what the developers have understood. Implicit assumptions and interpretations play a role, and they are – naturally – often wrong.

This is not a reproach to those involved; it is the unavoidable consequence of the fact that knowledge does not simply flow from one head to another. And then the correction is expensive. Not because coding is expensive, but because the missing understanding has to be made up while existing code is being adjusted.

The second factor is assumptions that turn out to be false. Every project begins with implicit assumptions about the problem, about the users, about the boundary conditions. Some of these assumptions are correct; some are not. The issue: most of these assumptions are never made explicit. They exist in the minds of those involved, unspoken and unexamined.

And the later a false assumption is recognized, the more expensive the correction becomes. An assumption questioned in the first week costs a conversation. An assumption that comes to light after six months of development requires a rewrite.

It's not uncommon to find that a feature worked on for months was based on a fundamentally false assumption and now needs to be completely rethought, or scrapped entirely.
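One lightweight way to make an assumption explicit in week one, rather than in month six, is to write it down as a cheap, executable check against real sample data. A minimal sketch (every name, record, and rule here is invented for illustration):

```python
# Hypothetical sketch: an implicit project assumption turned into an
# explicit data check. All names and records are invented.

def fetch_sample_orders():
    # Stand-in for pulling a realistic sample from the existing system.
    return [
        {"id": 1, "customer_email": "a@example.com"},
        {"id": 2, "customer_email": None},  # the case nobody mentioned
    ]

def violations_of(assumption, orders):
    """Return every order that breaks a stated assumption."""
    return [o for o in orders if not assumption(o)]

# The assumption, written down instead of left in someone's head:
def every_order_has_email(order):
    return order["customer_email"] is not None

bad = violations_of(every_order_has_email, fetch_sample_orders())
print(f"{len(bad)} order(s) break the assumption")
```

A failing check at this stage costs a conversation with the business side; the same discovery after six months of building on the assumption costs a rewrite.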

The third factor is iterations that arise from a lack of understanding. I don't mean the good iterations, i.e., the gradual refinement of a product based on real feedback. These iterations are valuable. I mean the unnecessary iterations: building something, realizing it's wrong, rebuilding. Not because the requirements have changed, but because they were not understood correctly from the start.

This type of iteration is expensive. It costs not only the time for the rewrite itself, but also the motivation of the team that sees its work discarded. It costs the trust of stakeholders who wonder why so much time was spent on something that ultimately isn't usable. It costs the opportunity to build something meaningful in that time.

All these factors have one thing in common: they don't arise from coding. They arise before that. Or more precisely, they arise because the "before" was neglected.

Understand First, Then Build

Amazon has a principle called "Working Backwards." The idea: before a product is developed, the team writes the press release for the finished product. Not as a marketing exercise, but as a thinking tool. The press release forces you to think from the result. What exactly does this product solve, and for whom? Why should anyone care? What problem does it address? And above all, how would a customer describe why this product improves their life?

This sounds like extra effort. It is extra effort at the beginning. But this effort saves many times the amount of time later because it uncovers misunderstandings before they are cast into code. It forces assumptions to be made explicit. It creates a shared understanding before the first line of code is written.

And it prevents months from being invested in a product that no one needs in the end.

The tools for this don't have to be fancy. Intensive conversations with experts, where you really listen, not just to gather requirements but to understand the issue. Workshops where processes are played through together to see where the complexity really lies. Visualizations that show how different stakeholders understand the issue and where their mental models diverge.

There are formalized methods like Event Storming, where business experts and developers collaboratively play through business processes on post-its on the wall. There are strategic patterns from Domain-Driven Design (DDD) that help break down large domains into manageable parts.
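The strategic DDD idea of breaking a domain into parts can be sketched in a few lines. The following is a hedged, miniature illustration (contexts, classes, and fields are all hypothetical): the same word "order" is allowed to mean different, precise things in two bounded contexts, with an explicit translation function at the boundary.

```python
from dataclasses import dataclass

# Hypothetical sketch of two bounded contexts from strategic DDD.
# "Order" deliberately means something different in each context.

# Sales context: an order is a priced promise to a customer.
@dataclass
class SalesOrder:
    order_id: str
    customer: str
    total_price: float

# Fulfillment context: an order is work for the warehouse;
# the price is irrelevant here and does not appear at all.
@dataclass
class FulfillmentOrder:
    order_id: str
    warehouse: str
    picked: bool = False

def to_fulfillment(sales: SalesOrder, warehouse: str) -> FulfillmentOrder:
    """Explicit translation at the context boundary: only what
    fulfillment actually needs crosses over."""
    return FulfillmentOrder(order_id=sales.order_id, warehouse=warehouse)

job = to_fulfillment(SalesOrder("A-17", "ACME", 99.0), warehouse="Berlin")
print(job)
```

The point is not the code itself but what writing it forces into the open: each context states its own model, and the translation between them is a visible, discussable artifact instead of an unspoken assumption.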

These and similar methods can be helpful, but they are not an end in themselves. Good results can be achieved without them if the basic attitude is right. Because the most important thing is not the method but the willingness to invest time in understanding before the first commit is made.

The willingness to ask questions that might sound stupid. The willingness to question assumptions, even if everyone else takes them for granted. The willingness to start slower to reach the goal faster.

"We will start slower so we can finish faster" sounds paradoxical. But it is the reality I observe project after project. Projects that take their time at the beginning reach their goal faster in the end. Projects that start immediately get lost in dead ends.

The Underrated ROI of Understanding

Investing in problem understanding has a return on investment (ROI). It's just harder to measure than the cost of an additional developer or the license fees for a tool.

This ROI is reflected in what doesn't happen: rewrites that become unnecessary. Iterations that are superfluous. Misunderstandings that don't become visible until production. Features that are not built because it becomes clear early on that they wouldn't solve the actual problem. Architectural decisions that don't need to be revised because they were based on a solid understanding from the start.

It is also reflected in what happens: software that actually solves the business problem – the first time, not after three iterations. Teams that understand the "why" and can therefore make better decisions, even without explicit instructions. Architectural decisions that last longer because they are based on a solid understanding of the domain rather than guesswork.

This is hard to sell. No one gets a bonus for mistakes that didn't happen. No one gets promoted because a project went smoothly. The successes of understanding are invisible. They manifest in the absence of problems, not in the presence of solutions.

This makes them easy to overlook and difficult to appreciate. But the math is clear: the time invested in understanding at the beginning pays off many times over in the end. A day invested in a good conversation with business experts can save weeks of rewriting. A workshop that uncovers assumptions can prevent months of development in the wrong direction. A question asked at the beginning can save an entire project from failure.

This doesn't mean spending months on analysis before starting. It means asking the right questions before pouring the answers into code. It means listening before building. It means understanding as part of the work, not as a delay.

The Real Bottleneck Deserves Attention

Software development is not slow because programming is slow. It is slow because understanding takes time – and because we often don't allow for that time. We accelerate the writing of code while the real bottleneck remains unaddressed.

This is not a reproach to individuals. The pressure to deliver results quickly is real. Management's impatience to see visible progress is understandable. Code is visible; understanding is not. It is human to optimize the visible. But it is also short-sighted.

The good news: this bottleneck can be addressed. Not through faster tools, but through the willingness to invest in understanding at the beginning of a project. Through conversations with business experts, through joint thinking about the problem, through uncovering assumptions before they are cast into code.

Through the insight that the fastest way to the goal is not always the one that starts the fastest. This costs time. But it costs less time than the alternative. And it leads to software that earns its name: software that solves problems instead of creating new ones.

The real bottleneck in software development is not coding. It never was. Those who recognize this have taken the first step to reaching their goal faster – by starting slower.

True to the famous motto of the Navy SEALs: "Slow is smooth, and smooth is fast."

the next big thing – Golo Roden

(Golo Roden is the founder and CTO of the native web GmbH. He works on the design and development of web and cloud applications and APIs, with a focus on event-driven and service-based distributed architectures. His guiding principle is that software development is not an end in itself but must always serve the underlying business domain.)
