Uncle Bob's Clean Code Returns: Same Controversies, Modern Languages, No Lessons Learned

Robert C. Martin, known as Uncle Bob, has released the second edition of his seminal book *Clean Code*—a work that has shaped software craftsmanship for nearly two decades.


alt="Article illustration 1"
loading="lazy">

First published in 2008, the original sparked endless debates over its refactoring examples and prescriptive style. Anticipation has been building around this update for the past year, with hopes it would address longstanding criticisms. Instead, as detailed in a pointed critique by developer TheAxolotl on his blog ([source](https://theaxolot.wordpress.com/2025/11/18/dont-refactor-like-uncle-bob-second-edition/)), Martin has doubled down on his controversial approach.

## High-Level Wisdom Meets Familiar Flaws

The new edition retains the strengths that made the original a staple. Martin passionately argues for proactive code maintenance as a professional duty, warning against the pitfalls of letting code rot until a full rewrite becomes inevitable. He illustrates this with a vivid anecdote: a team forced to maintain both the legacy code and a parallel rewrite, hemorrhaging productivity on bug fixes and feature parity.

Updated for 2024, the book now incorporates discussions of large language models (LLMs) like Grok and GitHub Copilot, comparing AI-generated refactors to his own. Martin showcases examples in modern languages—Golang, Python, JavaScript—alongside newer Java features like lambdas, streams, records, and pattern matching. He breaks refactors down into granular steps, anticipates counterarguments, and even includes a transcript of his debate with John Ousterhout, author of *A Philosophy of Software Design*.

Yet the critique argues these surface-level improvements mask deeper problems. Reused examples like `GuessStatisticsMessage` and `PrimeGenerator` return unchanged, still deemed "clean." New examples fare no better, with TheAxolotl zeroing in on the book's first major refactor: a Roman numeral parser.

## The Roman Numeral Parser: A Case Study in Over-Decomposition

The initial "unclean" code is a monolithic pure function that:

1. Validates against invalid three-letter sequences (e.g., "VIV").
2. Replaces subtractive pairs ("IV" → "4") with custom chars.
3. Checks repetitions (e.g., "IIII").
4. Maps chars to numbers via a switch.
5. Ensures non-increasing order.
6. Sums the array.

```java
public static int convert(String roman) {
    // ... (validation and replacements)
    int[] numbers = new int[roman.length()];
    // ... (switch mapping to numbers)
    int lastDigit = 1000;
    for (int number : numbers) {
        if (number > lastDigit) {
            throw new InvalidRomanNumeralException(roman);
        }
        lastDigit = number;
    }
    return Arrays.stream(numbers).sum();
}
```
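
The two elided comments hide steps 1 through 4 of that list, which is where most of the function's bulk lives. Purely as a hedged illustration of the shape of that preamble (the pattern lists, placeholder characters, and helper names below are guesses, not Martin's actual code), it works roughly like this:

```java
// Illustrative sketch only: the concrete pattern lists, placeholder
// characters, and helper names are assumptions, not Martin's code.
class RomanPreambleSketch {
    // The book's own exception type; assumed here to be an unchecked exception.
    static class InvalidRomanNumeralException extends RuntimeException {
        InvalidRomanNumeralException(String s) { super(s); }
    }

    static final String[] ILLEGAL_SEQUENCES = {"VIV", "LXL", "DCD"};
    static final String[] ILLEGAL_REPETITIONS = {"IIII", "XXXX", "CCCC"};

    static String validateAndSubstitute(String roman) {
        for (String bad : ILLEGAL_SEQUENCES) {              // step 1: reject sequences like "VIV"
            if (roman.contains(bad)) throw new InvalidRomanNumeralException(roman);
        }
        String replaced = roman                              // step 2: fold subtractive pairs into
                .replace("CM", "m").replace("CD", "d")       // single placeholder characters
                .replace("XC", "c").replace("XL", "l")
                .replace("IX", "9").replace("IV", "4");
        for (String bad : ILLEGAL_REPETITIONS) {             // step 3: reject over-long repetitions
            if (replaced.contains(bad)) throw new InvalidRomanNumeralException(roman);
        }
        return replaced;
    }

    static int valueOf(char c) {                             // step 4: map each character via switch
        switch (c) {
            case 'M': return 1000; case 'm': return 900;
            case 'D': return 500;  case 'd': return 400;
            case 'C': return 100;  case 'c': return 90;
            case 'L': return 50;   case 'l': return 40;
            case 'X': return 10;   case '9': return 9;
            case 'V': return 5;    case '4': return 4;
            case 'I': return 1;
            default: throw new InvalidRomanNumeralException(String.valueOf(c));
        }
    }
}
```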

Martin refactors this into an object-oriented labyrinth of instance variables and tiny methods:
```java
public class FromRoman2 {
    private String roman;
    private List<Integer> numbers = new ArrayList<>();
    private int charIx;
    // ... more state

    private void convertLettersToNumbers() {
        // Loops with side effects on instance vars
        // switch (thisChar) -> addValueConsideringPrefix(...)
    }

    private void addValueConsideringPrefix(char p1, char p2) {
        if (nextChar == p1 || nextChar == p2) {
            numbers.add(nextValue - value);
            charIx++;  // Mutates outer loop state!
        }
    }
}
```

TheAxolotl's takedown is surgical: Martin's version sacrifices a pure function for scattered state, creating indirection without reducing complexity. Readers must drill through method chains (`doConversion()` → `convertLettersToNumbers()` → `addValueConsideringPrefix()`) to grasp the algorithm, trusting method names despite hidden mutations.

## An Alternative Refactor: Local State, Explicit Logic

TheAxolotl offers his own version, preserving Martin's algorithm but prioritizing readability:
```java
public static int convert(String roman) {
    // Constants for patterns and mappings
    if (containsIllegalPatterns(roman, ILLEGAL_PREFIX_COMBINATIONS) ||
        containsIllegalPatterns(roman, IMPROPER_REPETITIONS)) {
        throw new InvalidRomanNumeralException(roman);
    }
    List<Integer> numbers = new ArrayList<>();
    int i = 0;
    while (i < roman.length()) {
        char currentLetter = roman.charAt(i);
        char nextLetter = i == roman.length() - 1 ? 0 : roman.charAt(i + 1);
        if (NUMERAL_PREFIXES.getOrDefault(nextLetter, (char) 0) == currentLetter) {
            int num = ROMAN_NUMERALS.get(nextLetter) - ROMAN_NUMERALS.get(currentLetter);
            numbers.add(num);
            i += 2;
        } else {
            numbers.add(ROMAN_NUMERALS.get(currentLetter));
            i += 1;
        }
    }
    if (containsIncreasingNumbers(numbers)) {
        throw new InvalidRomanNumeralException(roman);
    }
    return numbers.stream().mapToInt(Integer::intValue).sum();
}
```

Key improvements: local variables over instance state, constants for magic values, a single meaty loop exposing the algorithm, and predicate methods (`containsIllegalPatterns`) with explicit exception throws in the caller. The post validates this against Martin's comprehensive test suite, which covers valid numerals up to "MMXXIV" (2024) and dozens of invalids.
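
The snippet leans on a handful of constants and predicate helpers that the post defines alongside `convert`. As a rough, hypothetical sketch of what they might look like (the exact maps and pattern lists below are assumptions, not copied from the post):

```java
import java.util.List;
import java.util.Map;

// Hypothetical supporting definitions for the refactor above; the real ones
// in TheAxolotl's post may differ. They would sit in the same class as convert().
class RomanNumeralSupport {
    static final Map<Character, Integer> ROMAN_NUMERALS = Map.of(
            'I', 1, 'V', 5, 'X', 10, 'L', 50, 'C', 100, 'D', 500, 'M', 1000);

    // For each letter, the one letter allowed directly before it as a
    // subtractive prefix, e.g. 'I' before 'V' ("IV") or 'C' before 'M' ("CM").
    static final Map<Character, Character> NUMERAL_PREFIXES = Map.of(
            'V', 'I', 'X', 'I', 'L', 'X', 'C', 'X', 'D', 'C', 'M', 'C');

    // Example pattern lists only; the article does not show the full lists.
    static final List<String> ILLEGAL_PREFIX_COMBINATIONS = List.of("VIV", "LXL", "DCD");
    static final List<String> IMPROPER_REPETITIONS = List.of("IIII", "XXXX", "CCCC", "MMMM");

    static boolean containsIllegalPatterns(String roman, List<String> patterns) {
        return patterns.stream().anyMatch(roman::contains);
    }

    static boolean containsIncreasingNumbers(List<Integer> numbers) {
        for (int i = 1; i < numbers.size(); i++) {
            if (numbers.get(i) > numbers.get(i - 1)) {
                return true;  // a later value outranks an earlier one, e.g. "IM"
            }
        }
        return false;
    }
}
```

Keeping these as named constants and small predicates is what lets the main loop remain a single, readable pass over the string.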

## The Purity Debacle: A Conceptual Misstep

Martin's defense hinges on a mangled definition of purity. He equates assignment statements with impurity, arguing that internal state within a pure outer function is acceptable:

"Pure functions are immutable. ... Purity is an external characteristic... it does not matter how impure the internals of a function are—that function will be pure so long as all the impurity is hidden."


TheAxolotl counters that purity has to apply recursively, to every function in the chain, if code is to stay easy to reason about. Stateful helpers erode trust, forcing readers to trace mutations across call stacks. Scoped local variables minimize cognitive load; instance variables scatter it.
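
A minimal, made-up contrast of the two positions (this example is not from the book or the post) shows why the distinction matters in practice:

```java
// Hypothetical illustration of the two styles under debate; not taken
// from either the book or the critique.
class SumExamples {
    private int total;   // instance state shared across helper calls
    private int index;

    // Externally "pure" by Martin's definition: same input, same output,
    // no visible side effects. Internally, though, it mutates fields that
    // every helper can touch.
    int sumViaInstanceState(int[] values) {
        total = 0;
        for (index = 0; index < values.length; index++) {
            addCurrent(values);   // reader must check: does this also move index?
        }
        return total;
    }

    private void addCurrent(int[] values) {
        total += values[index];
    }

    // The alternative: all state is local, so the whole computation can be
    // read (and tested) without tracing mutations across methods.
    static int sumViaLocalState(int[] values) {
        int total = 0;
        for (int value : values) {
            total += value;
        }
        return total;
    }
}
```

Both methods return the same result for the same input, which is exactly why Martin would call the first one pure; TheAxolotl's objection is that only the second can be understood one method at a time.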

This echoes broader tensions in software design: functional vs. OO purists, explicit vs. abstracted control flow, trust-through-purity vs. trust-through-naming.

## What This Means for Developers

*Clean Code* remains essential reading for its principles—SOLID architecture, ethical code stewardship, resistance to technical debt. But as TheAxolotl concludes, "Follow the advice at a very high level, but ignore the examples." The second edition modernizes without evolving, preserving code that prioritizes superficial decomposition over algorithmic clarity.

In an era of AI-assisted coding, Martin's inclusion of LLMs feels like a nod to the future, yet his refactorings lag behind contemporary best practices. Developers must weigh this: embrace the philosophy, but forge your own path through the code.