
You propose a solution to your team's problem. Everyone nods. The HIPPO (Highest Paid Person's Opinion) effect has struck again—but this apparent consensus might be masking catastrophic flaws. Welcome to the invisible battlefield of authority gradients, where hierarchy and confidence often override truth, especially in technology teams.

When Silence Crashes Planes (and Code)

Captain: "It's spooled. Real cold, real cold."
Co-pilot: "God, look at that thing. That don’t seem right, does it? Uh, that’s not right."
Captain: "Yes it is, there’s eighty."
Co-pilot: "Naw, I don’t think that’s right. Ah, maybe it is."

This hesitant exchange from Air Florida Flight 90, which crashed into the Potomac River moments later, shows how tentative language fails against entrenched authority. Aviation's answer was Crew Resource Management (CRM), which trains every crew member to speak up and captains to listen. Tech teams face similar risks: Google's Project Aristotle identified psychological safety, the sense that team members can take risks without being punished for it, as the single most important trait of high-performing groups. Yet in tech, we ignore these social dynamics at our peril.

Why Developers Are Trapped in the Gradient

Tech environments amplify the dangers of a steep authority gradient:

  • The pattern-matching trap: Seniors rely on historical solutions, missing subtle context shifts.
  • The novice's curse: Juniors spot fresh insights but hesitate to challenge "experts."
  • The Spock fallacy: We pretend logic alone wins arguments while ignoring tone, hierarchy, and power dynamics.
Caption: An image that breaks up the text. And I can’t think of a witty caption.

AI: The Ultimate Confidence Machine

Enter large language models—authority amplifiers that never doubt, hesitate, or admit ignorance. They combine:

  • Babble effect: Volume of output mistaken for validity
  • HIPPO 2.0: High-cost AI tools granted undue deference
  • Automation bias: Over-trust in algorithmic outputs

This creates a perfect storm: an "authority" without accountability, drowning out dissent with synthetic certainty. When juniors use AI to bolster arguments, it can democratize insight. But when teams treat AI outputs as gospel, critical thinking evaporates.

Rebalancing the Cockpit: Humans First, AI Second

To harness AI without surrendering to it:

  1. Anchor in human context: Use humans to define problems before AI generates solutions.
  2. Treat outputs as hypotheses: Demand verification rituals, as in "Show me the tests for this approach" (see the sketch after this list).
  3. Remember AI has no skin in the game: Unlike your team, algorithms don’t face consequences.
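
To make point 2 concrete, here is a minimal sketch of what a verification ritual can look like in practice, written in Python with pytest. Everything in it is hypothetical: parse_duration stands in for whatever helper an assistant might propose, and the test cases stand in for the behaviour the team actually agrees it needs.

```python
# A minimal sketch of a "verification ritual" for assistant-suggested code.
# parse_duration and its behaviour are hypothetical, purely for illustration.
import pytest


def parse_duration(text: str) -> int:
    """Convert strings like '1h30m' or '45s' into seconds (assistant-suggested)."""
    units = {"h": 3600, "m": 60, "s": 1}
    text = text.strip()
    if not text:
        raise ValueError("empty duration string")
    total, number = 0, ""
    for ch in text:
        if ch.isdigit():
            number += ch
        elif ch in units and number:
            total += int(number) * units[ch]
            number = ""
        else:
            raise ValueError(f"unexpected character {ch!r} in {text!r}")
    if number:  # trailing digits with no unit are ambiguous, so reject them
        raise ValueError(f"missing unit in {text!r}")
    return total


# The tests are the team's contract: they pin down the cases the humans care
# about *before* the suggestion is accepted. If the generated code fails here,
# the conversation is about evidence, not about who sounded most confident.
@pytest.mark.parametrize(
    "text, expected",
    [("45s", 45), ("1h30m", 5400), ("2h", 7200), ("0s", 0)],
)
def test_parse_duration_happy_path(text, expected):
    assert parse_duration(text) == expected


@pytest.mark.parametrize("text", ["", "90", "1x", "h30m"])
def test_parse_duration_rejects_ambiguous_input(text):
    with pytest.raises(ValueError):
        parse_duration(text)
```

Run with plain pytest, this makes the assistant's output falsifiable: it either satisfies the team's stated expectations or it goes back for another round, no deference required.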

As aviation proved, flattening authority gradients saves lives. In tech, it saves products, teams, and innovation itself. AI should serve as copilot—not captain—in our collective cockpit.

Source: Adapted from "Authority Gradients" by Jeff, originally published on JoT Substack.