A deep exploration of the belief that computer systems are fundamentally comprehensible, examining how this mindset shapes debugging, learning, and engineering practices while acknowledging its limitations and offering practical guidance for cultivating it.
In the vast landscape of modern software engineering, where systems have grown to encompass layers upon layers of abstraction, it's easy to feel overwhelmed by complexity. We work with web applications that span from silicon transistors to JavaScript VMs, from kernel internals to cloud services, and it's tempting to treat these systems as black boxes—impenetrable mysteries best left unexplored. But what if we approached software with a different mindset? What if we believed, deeply and fundamentally, that computers can be understood?
The Core Belief: Software Can Be Understood
This mindset begins with a radical assertion: that computers and software systems are comprehensible. Not in some abstract theoretical sense, but as a practical, felt belief that any question we might ask about computers has an accessible answer through determined exploration and learning.
The modern software stack is undeniably complex. A single web application might involve dozens of layers: transistors, micro-architecture, CPU instruction sets, operating system kernels, user libraries, compilers, browsers, JavaScript VMs, network services, and more. It's common and largely correct to observe that no single human understands every layer of such a system in complete detail.
Yet this complexity doesn't negate the fundamental comprehensibility of these systems. Each layer is built on deterministic foundations that follow strict rules. We've constructed abstractions upon these foundations, and each abstraction behaves in reproducible, deterministic ways based on the layer beneath it. There is no magic—no point where we encounter unknowable demons making arbitrary decisions.
The Tools of Understanding
When we embrace this mindset, we gain access to powerful tools for learning and problem-solving.
Source Code and Documentation
In modern systems, many layers are open-source and can be understood directly by reading the implementation. If you're working on a Ruby on Rails application with a MySQL database deployed to Linux, every relevant piece of software is open-source and readable. Even when systems aren't open-source, deep documentation often exists. Intel, for instance, provides thousands of pages of processor documentation that exhaustively explore hardware behavior.
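As an illustrative sketch of how directly this works in practice, Python's standard `inspect` module can jump from an API straight to its implementation, without leaving the interpreter (the `json` module is used here only because it ships with every Python install):

```python
import inspect
import json

# Where does the json module live on disk?
print(inspect.getsourcefile(json))

# Read the actual implementation of json.dumps, not just its docstring.
source = inspect.getsource(json.dumps)
print(source.splitlines()[0])  # the "def dumps(...)" line
```

The same two calls work on most pure-Python dependencies, which makes "just read the source" a one-liner rather than a research project.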
Reverse Engineering
When source and documentation are insufficient, systems can still be reverse-engineered through experiment and inspection. Security engineers excel at this skill—Google Project Zero's reverse-engineering of the Haswell microarchitecture's branch predictor, or efforts to document macOS internals, demonstrate that even the most complex systems can be understood through determined effort.
Manifestations of the Mindset
This belief in comprehensibility manifests in several powerful ways in engineering practice.
Understanding Dependencies
Engineers with this mindset habitually learn about the systems they depend on. If you're writing a non-trivial application against a library or framework, you likely have the source code checked out and regularly dig into it when documentation falls short or strange behaviors emerge.
Debugging as Deep Investigation
This habit becomes a superpower for debugging tricky bugs. When you understand your dependencies, you can accurately describe and diagnose issues in terms of the tool's abstractions, producing actionable bug reports or minimal reproductions. The trickiest bugs often span multiple layers or involve leaky abstraction boundaries—problems that practically require someone comfortable moving between layers to track down.
Reduced Reliance on Documentation
A willingness and ability to read source code reduces your reliance on documentation. When documentation is lacking, you can always go to the source for an authoritative answer. However, this strength can become a weakness: engineers who excel at reading unfamiliar codebases may undervalue documentation and be worse at documenting their own systems.
Security and Performance
Understanding security issues often requires working at multiple levels of abstraction. An attacker isn't bound by documented behavior—they care about how systems actually behave in practice. Similarly, understanding software performance frequently involves understanding multiple layers: efficient Python code requires understanding CPython's implementation, cache-efficient C code requires understanding generated code and hardware.
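As a small illustration of looking one layer down, Python's standard `dis` module shows the bytecode CPython actually executes for a function, which is often the first step in understanding why one phrasing of the same logic is faster than another (the exact opcode names vary by Python version):

```python
import dis

def add(a, b):
    return a + b

# Print a human-readable disassembly of the function.
dis.dis(add)

# The instruction stream is also available programmatically.
ops = [ins.opname for ins in dis.Bytecode(add)]
print(ops)  # loads of a and b, an add, then RETURN_VALUE
```

Reading this output regularly builds exactly the kind of cross-layer intuition the paragraph above describes: the Python you write and the instructions CPython runs stop being separate worlds.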
Building Mental Models
A deeply related habit is trying to understand software by building detailed mental models of underlying systems. Instead of memorizing specific patterns and edge cases, you try to understand core primitives and the rules that generate larger behaviors.
For example, instead of memorizing when to quote in bash scripts, you might learn about bash's expansion phases and the order in which they're applied. This doesn't eliminate the need to learn trivia, but it provides a framework that makes it easier to retain knowledge and apply it to novel problems.
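The same principle applies in any language. As an illustrative Python analog: rather than memorizing "never use a mutable default argument" as an isolated gotcha, you can learn the underlying rule that default values are evaluated once, when the `def` statement runs, and the surprising behavior then follows directly:

```python
def append_item(item, bucket=[]):
    # The default list is created once, at function definition time,
    # so every call that omits `bucket` shares the same list object.
    bucket.append(item)
    return bucket

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2] -- the same underlying list as the first call
```

Once you hold the rule, you no longer need to memorize the gotcha; you can re-derive it, and you can predict related behaviors (such as mutable defaults in keyword-only arguments) that you have never seen before.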
Single-Shot Debugging
With rich mental models of deterministic systems, you can make detailed inferences about program state from small numbers of observations. In extreme cases, developers can root-cause bugs based on a single encounter. This involves backwards reasoning: "If this field is NULL, someone must have set it... the only code that sets that field is here, here, and here..."
This skill is particularly prevalent among kernel engineers, who often must debug crashes using only crash dumps or log traces. Some Solaris kernel engineers even had an explicit goal of 100% root-cause identification from single crash reports, building elaborate crash-reporting technology to support this approach.
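The backwards-reasoning pattern can be sketched in miniature (all names here are hypothetical, invented for illustration). Suppose a single crash report shows that `order.shipped_at` was `None` inside `invoice()`:

```python
class Order:
    def __init__(self):
        # Writer #1: initialization leaves the field unset.
        self.shipped_at = None

    def mark_shipped(self, timestamp):
        # Writer #2: the only code that assigns a real value.
        self.shipped_at = timestamp

def invoice(order):
    # From one observation -- shipped_at was None here -- plus the fact
    # that only the two writers above ever touch the field, we can infer
    # without a reproduction that mark_shipped() was never called on
    # this order before invoicing.
    if order.shipped_at is None:
        raise ValueError("invoiced an order that was never marked shipped")
    return f"shipped at {order.shipped_at}"
```

Real systems have far more writers and far more state, but the structure of the inference is the same: enumerate the code that could have produced the observed state, then eliminate candidates using what else the crash report tells you.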
The Pitfalls
This mindset, while powerful, comes with significant pitfalls that can lead engineers astray.
The Need to Understand
The belief that systems can be understood can become a need to understand everything you work with. This can be harmful when your goals don't require deep understanding. You might find yourself uncomfortable working in systems without good mental models of underlying layers, making it hard to get started with new tools or complete simple tasks efficiently.
For instance, when trying to set up an AWS Lambda endpoint, you might insist on understanding every component from scratch rather than following a tutorial. This could lead to 30 open documentation tabs, a non-functional endpoint, and wasted time—when a simple tutorial approach might have succeeded in 30 minutes.
Premature Optimization of Approach
Engineers with strong reverse-engineering and code-reading skills might habitually reach for these complex approaches when simpler ones would suffice. You might spend days digging into a dependency to identify a bug, only to discover it was already fixed upstream. Or you might debug a crash in a binary without symbols by poring over disassembly, when a debug build would have made the problem trivial.
The Seduction of Complexity
The mindset can lead to over-engineering solutions or spending disproportionate time on understanding when quick-and-dirty approaches would be more appropriate. The ability to understand complex systems doesn't mean you should always apply that ability to every problem.
Getting Started: Cultivating the Mindset
How do you begin cultivating this mindset, especially if you're not already an experienced engineer?
The practical advice is to cultivate deep curiosity about the systems you work with. Ask questions about how they work, why they work that way, and how they were built. Ask yourself "How would I have built this library?" and identify the gaps in your understanding.
Start with Dependencies
Begin by reading the source of your dependencies. If you write web applications on React, grab a checkout and read through the source. If you work on Django or Rails applications, check out the framework source. You can even grab a copy of the CPython or Ruby interpreter source and look inside.
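A hedged first step, sketched with the standard library's `json` package as a stand-in for whatever dependency you actually use: Python can tell you exactly where an installed package's source lives, so reading it is one lookup away.

```python
import importlib.util

# Substitute "django", "flask", or any installed dependency here;
# json is used only because it ships with every Python.
spec = importlib.util.find_spec("json")
print(spec.origin)                      # path to json/__init__.py on this machine
print(spec.submodule_search_locations)  # directories holding the package's modules
```

Opening that directory in your editor is often all it takes to turn an opaque dependency into ordinary, readable code.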
Your goal isn't to understand everything at once, but to build understanding and confidence that you can always understand more tomorrow. Learning about software systems is a compounding skill—the more systems you've seen, the more patterns you have available for future systems, and the more skills you develop for future problems.
Learn from Role Models
Engineers like Julia Evans exemplify this mindset publicly. She writes and talks not just about what she learns, but how she learns it, conveying curiosity and excitement about discovery. Her posts on kernel development, asking great questions, and becoming a "wizard" provide concrete examples of approaching complex systems with curiosity rather than fear.
The Fundamental Truth
Computers are complex, but they need not be mysteries. Any question we care to ask can, in principle and usually in practice, be answered. Unknown systems can be approached with curiosity and determination, not fear.
This belief—that computers can be understood—is a huge aspect of how experienced engineers approach software engineering. It's not the only valid approach, and it comes with weaknesses, but it provides a powerful framework for learning, debugging, and building software.
The next time you encounter a complex system that seems impenetrable, remember: there is no magic. There are only layers of abstraction, each built on deterministic foundations, each waiting to be understood by someone curious enough to explore them. The question isn't whether you can understand it—it's whether you're willing to try.