A performance review revelation sparks a framework for calibrating technical decision rigor based on reversibility, coupling, and knowledge accumulation – preventing misallocated brainpower in engineering teams.
{{IMAGE:1}}
My recent performance review contained a revelation that lingered: though skilled at technical evaluations, I’d failed to make the underlying architecture of those decisions legible to others. When advocating for Kafka’s adoption at Mercury, I’d built consensus for the high-level proposal but neglected to map the intricate terrain of tradeoffs and implementation details that would shape our work for years. The organizational momentum to move forward obscured a critical truth: decisions aren’t equal, and misallocating deliberation wastes our best thinking on choices that don’t merit it.
The Hidden Cost of Undifferentiated Rigor
Engineering cultures often default to inconsistent decision-making patterns. Slack threads about linter configurations balloon into calendar invites, while database selections materialize via unreviewed PRs. This occurs not through negligence, but because teams lack a shared model for calibrating effort. We apply uniform seasoning to every dish – sometimes salting crème brûlée, sometimes underspicing stew. The consequence is ambient misallocation: exhaustion from over-engineering reversible choices, and costly migrations from under-engineering irreversible ones.
Concave, Convex, Linear: A Taxonomy of Technical Choices
Drawing on Nassim Taleb’s convexity principles, I sort technical decisions into three distinct payoff structures:
Concave (downward curve): Downside risk dwarfs upside gain. Examples: Databases, auth models, message brokers.
- Reversal involves migrations or rewrites
- High coupling across system boundaries
- Knowledge ratchet effect locks in institutional expertise
Convex (upward curve): Upside compounds; downside is bounded. Examples: Linters, internal libraries, monitoring tools (early stage).
- Reversible in days/weeks
- Impacts isolated components
- Failure costs less than prolonged deliberation
Linear: Difference between options < deliberation cost. Examples: Date libraries, YAML parsers.
- Just pick one
Three Diagnostic Questions
Reversibility: "If wrong, what does undoing cost?"
- Kafka reversal: Multi-team migration → Concave
- Logging library swap: Update imports → Convex
Coupling: "How many teams coordinate reversal?"
- Authentication model changes → All teams (Concave)
- HTTP client update → One team (Convex)
Knowledge Ratchet: "Does prolonged use create sunk cost?"
- Monitoring tools accumulate dashboards and runbooks → become concave over time
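To make the three questions concrete, here is a rough sketch in Python. The function, its inputs, and its thresholds are purely illustrative (this is not a tool we actually run); the point is how the three answers compose into a category.

```python
def classify_decision(
    reversal_cost_weeks: float,    # Reversibility: if wrong, how long to undo?
    teams_to_coordinate: int,      # Coupling: how many teams must coordinate a reversal?
    accumulates_expertise: bool,   # Knowledge ratchet: do dashboards, runbooks, habits pile up?
    options_roughly_equivalent: bool = False,
) -> str:
    """Map the three diagnostic questions to a payoff shape.

    The thresholds are placeholders; the order of the checks matters more
    than the exact numbers.
    """
    if options_roughly_equivalent:
        return "linear"   # difference between options < deliberation cost: just pick one
    if reversal_cost_weeks >= 8 or teams_to_coordinate > 1 or accumulates_expertise:
        return "concave"  # any single factor is enough reason to slow down
    return "convex"       # cheap to undo, locally scoped, nothing ratchets


# Kafka adoption: reversing means a multi-team migration, and expertise ratchets in.
print(classify_decision(26, teams_to_coordinate=5, accumulates_expertise=True))   # concave

# Swapping a logging library: update imports in one service.
print(classify_decision(1, teams_to_coordinate=1, accumulates_expertise=False))   # convex
```

In practice the inputs are judgment calls, not measurements; the value is in being forced to answer all three questions explicitly before debating options.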
The Kafka Case: When Communication Is Part of the Decision
My mistake with Kafka adoption wasn’t flawed technical evaluation – it was underestimating the communication burden inherent to concave commitments. Having lived with the problem space for years, I’d developed a dense mental model of schema evolution, consumer semantics, and failure modes. Yet I documented implementation steps (draft PRs) without translating the underlying rationale. For concave decisions, the map matters as much as the territory. Teams navigating terrain you’ve charted but not shared will inevitably rediscover pitfalls you’ve already cleared.
Calibrating the Deliberation Budget
- Concave: Slow down. Prototype. Write comprehensive ADRs. Consult dissenters. Time invested now prevents years of rework.
- Convex: Move fast. Frame as experiments: "Try Loki for logging; reevaluate in Q3." Document in 3 sentences.
- Linear: Decide instantly. No documentation. "It was fine" suffices.
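Written down as data, the budget stops being renegotiated for every decision. The sketch below is again illustrative only; the time scales and artifact names are placeholders for whatever a team actually agrees on.

```python
# Default deliberation budgets keyed by payoff shape.
# Time scales and artifacts are illustrative placeholders, not policy.
DELIBERATION_BUDGET = {
    "concave": {
        "time": "weeks",
        "artifacts": ["prototype", "comprehensive ADR", "review with dissenters"],
    },
    "convex": {
        "time": "days",
        "artifacts": ["three-sentence experiment note with a reevaluation date"],
    },
    "linear": {
        "time": "minutes",
        "artifacts": [],  # "it was fine" suffices
    },
}

# e.g. shape = classify_decision(...) from the earlier sketch
shape = "concave"
budget = DELIBERATION_BUDGET[shape]
print(f"Spend {budget['time']}; produce: {', '.join(budget['artifacts']) or 'nothing'}")
```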
The power lies in naming. Saying "this is concave" compresses reversal cost, coupling, and ratchet effects into a signal to slow down. Whether using Bezos’ "one-way doors" or this taxonomy, shared language prevents relitigating rigor for every decision. Ultimately, a choice isn’t complete until those living with it see its contours as clearly as you do – a lesson I’m still learning, one concave commitment at a time.