When Michael Rabin posted a curious poison puzzle to an electronic bulletin board at Carnegie Mellon University in the late 1980s, he wasn’t shipping a security paper or a protocol spec. But he might as well have been.

The puzzle—recently revived via Timothy Chow’s analysis in *Mathematics Magazine* and spotlighted by The Guardian—reads like a medieval thought experiment. Two rival poison makers, Smith and Jones, are forced into a lethal drink-off by a Queen who wants to learn whose poison is stronger. Both must:

1. Drink the other’s poison.
2. Then drink their own poison.
3. Survive an hour to prove supremacy.

Everyone knows the rules:

- Any poison kills within an hour.
- A stronger poison taken after a weaker one acts as a perfect antidote.
- Each maker has multiple distinct poisons of different strengths.
- They cannot access each other’s poisons.
- Both are incentivized to bring their strongest.

On paper, it sounds deterministic: strongest poison wins. In practice, both Smith and Jones die.

It’s a beautiful lateral-thinking puzzle. But for anyone building secure systems, multiparty protocols, or competitive AI agents, it’s something more important: a compact parable of strategic reasoning under asymmetric information.

> In one page of narrative, Rabin sneaks in dominance, beliefs, incentives, and adversarial protocol design—all without a payoff matrix in sight.

---

## The hidden game theory in the glass

The core mechanics are obvious to a puzzle fan but worth decoding in tech terms:

- We’re in a two-player, adversarial game.
- Each player controls a parameter: the strength of the poison they choose to bring.
- There is incomplete information: neither knows the other’s maximum strength.
- The Queen’s protocol is fixed and publicly known.
- The outcome is determined by how each anticipates the other’s rational behavior.
What unfolds is essentially a reasoning cascade:

- If you expect your opponent to bring their strongest poison, you might reason you should not bring your strongest, but instead a slightly weaker one designed to turn their vial into your antidote.
- But your opponent anticipates that you will anticipate that—and may choose a different strategy.
- And so on.

The punchline is not just the clever resolution (which The Guardian and Chow present separately), but the structure: rational agents, constrained protocol, common knowledge, adversarial payoffs. This is the same mental machinery behind:

- secure key exchange design,
- zero-knowledge protocols,
- MEV games in blockchains,
- auction mechanisms,
- AI agents competing under uncertainty.

The puzzle is effectively a story about how protocols fail—or become unexpectedly lethal—when participants are too good at reasoning.

---

## From parlor puzzle to protocol design

Strip away the medieval set dressing and what you have is a protocol:
```text
Protocol PoisonGame:
  Input: Two parties, S and J, each with a set of poisons {p1..pn} of varying strengths.
  Rules (public):
    - Poison kills in < 1 hr unless followed by a stronger poison.
    - Ceremony:
        1. S drinks vial_J
        2. J drinks vial_S
        3. S drinks vial_S
        4. J drinks vial_J
    - No tampering, no external resources.
    - Goal of each: maximize probability of survival.
```
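The antidote rule is the whole game, so it helps to make it executable. Here is a minimal sketch of the stated mechanics (the function names and the strength-0-means-harmless convention are my own modeling assumptions, not part of the puzzle):

```python
def survives(drinks):
    """Simulate a sequence of drinks, in order, under the puzzle's rules:
    any poison kills within the hour unless a strictly stronger poison is
    drunk after it, in which case the antidote neutralizes both doses.
    Strength 0 stands for a harmless drink."""
    pending = 0                                # strongest unneutralized poison so far
    for strength in drinks:
        if strength == 0:
            continue                           # harmless: changes nothing
        if 0 < pending < strength:
            pending = 0                        # perfect antidote: both doses neutralized
        else:
            pending = max(pending, strength)   # no cure: the stronger dose keeps ticking
    return pending == 0


def ceremony(vial_S, vial_J):
    """The Queen's ceremony: each chemist drinks the rival's vial first,
    then their own. Returns (S survives, J survives)."""
    return survives([vial_J, vial_S]), survives([vial_S, vial_J])


# The Queen's intended outcome: if both bring real poisons of different
# strengths, the stronger chemist's own vial cures the rival's dose.
print(ceremony(3, 2))   # → (True, False): S survives, J does not
```

One modeling choice worth flagging: a weaker poison drunk after a stronger one neither cures nor resets anything here; the stronger dose simply remains lethal.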

When security engineers review a protocol like this, they ask:

- What are the incentives?
- What strategies are strictly dominated?
- What happens under perfect rational play?
- Does the mechanism designer (here, the Queen) actually get what she wants?

Rabin’s setup showcases a classic design sin: assuming that if you specify steps clearly enough, rational parties will behave in a way that reveals the truth you care about—in this case, whose poison is stronger. But in any adversarial context, clarity of steps is not enough. You must ensure that:

- honest behavior is aligned with self-interest, and
- there is no equilibrium in which everyone “plays correctly” and the system still collapses.

In the puzzle, the Queen’s mechanism is misaligned. It guarantees that fully rational, self-preserving chemists can drive the system to mutual destruction. That’s not a corner case; that’s the logical endpoint. For protocol designers, this is the lesson: a design that ignores strategic adaptation is already broken.
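The Queen’s flawed assumption can be made concrete. In a deliberately restricted model where each chemist’s only move is choosing which of their poisons to bring (the helper names and strength values below are illustrative assumptions), bringing the strongest is weakly dominant and the ceremony does reveal the truth. The puzzle’s point is that real adversaries do not stay inside that restricted strategy space:

```python
def ceremony_survives(mine, theirs):
    """In the ceremony you drink the rival's vial, then your own: you live
    iff your poison is strictly stronger and so acts as the antidote."""
    return mine > theirs

def weakly_dominant(strategy, my_options, their_options):
    """True if `strategy` does at least as well as every alternative
    against every possible opponent choice."""
    return all(
        ceremony_survives(strategy, theirs) >= ceremony_survives(alt, theirs)
        for alt in my_options
        for theirs in their_options
    )

strengths = [1, 2, 3]
print(weakly_dominant(3, strengths, strengths))  # → True: strongest dominates
print(weakly_dominant(2, strengths, strengths))  # → False
```

Within this narrow model the mechanism works as intended; the failure only appears once players can act outside the space the designer analyzed.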

---

## Why this matters to modern security and AI folks

Here is where the puzzle stops being a curiosity and starts feeling uncomfortably current.

1. Adversarial security assumptions
   - Just as Smith and Jones can only choose from their own poisons, real-world attackers and defenders are bound by their capabilities—but they will search that space aggressively.
   - Systems that look safe under naive behavior can fail dramatically once you assume strategic, adaptive adversaries.
2. Common knowledge and exploitability
   - In the puzzle, the rules are public and stable. That very predictability enables lethal strategies.
   - In cryptographic protocols, smart contract systems, and on-chain auctions, full transparency is a double-edged sword: it’s necessary for trust, but it also feeds adversarial modeling and MEV extraction.
3. Mechanism design in blockchains and marketplaces
   - The Queen’s mistake is painfully similar to early DeFi protocols and NFT auctions that assumed participants would act “as intended,” only to discover sandwich attacks, oracle manipulation, or griefing strategies.
   - Like the drink-off, many systems accidentally reward behavior that undermines the system’s stated goal.
4. Multi-agent AI systems
   - As we deploy LLM-based agents that negotiate, trade, or manage resources, we are effectively re-running Rabin’s game at scale.
   - If agents are trained or prompted to optimize their own survival/utility under known rules, emergent strategies—collusion, deception, mutual destruction—are features, not bugs.
   - The puzzle is a cautionary tale: if we don’t encode alignment objectives and robust incentives into the environment, "perfectly rational" agents can converge on catastrophically bad equilibria.

---

## Lessons for engineers hidden in a glass of poison

Treat the drink-off as a compact checklist for your next system, protocol, or model deployment:

- Model rational, adaptive adversaries, not just naive users.
- Validate that your protocol’s equilibrium behavior matches your intent.
- Assume public rules will be gamed; design so that gaming reinforces, rather than subverts, the goal.
- Don’t confuse “clearly specified steps” with “correct incentives.”
- When multiple self-interested actors operate under shared knowledge, expect non-obvious, sometimes mutually destructive strategies.
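The equilibrium check in that list is mechanical enough to automate for small games. A sketch (the strategy names and payoff numbers below are invented for illustration, not the poison game’s actual payoffs): enumerate every pure-strategy profile and keep those where neither player gains by deviating unilaterally.

```python
from itertools import product

def pure_nash_equilibria(payoffs):
    """payoffs maps (row_strategy, col_strategy) -> (row_payoff, col_payoff).
    A profile is a pure Nash equilibrium if neither player can improve
    by unilaterally switching strategies."""
    rows = sorted({r for r, _ in payoffs})
    cols = sorted({c for _, c in payoffs})
    equilibria = []
    for r, c in product(rows, cols):
        u_row, u_col = payoffs[(r, c)]
        row_best = all(payoffs[(alt, c)][0] <= u_row for alt in rows)
        col_best = all(payoffs[(r, alt)][1] <= u_col for alt in cols)
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria

# A designer who intends mutual honesty but inadvertently rewards gaming:
payoffs = {
    ("honest", "honest"): (2, 2),
    ("honest", "game"):   (0, 3),
    ("game",   "honest"): (3, 0),
    ("game",   "game"):   (1, 1),
}
print(pure_nash_equilibria(payoffs))  # → [('game', 'game')]
```

If the only equilibrium is not the behavior you intended, the bug is in the mechanism, not the participants: the Queen’s situation exactly.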

In retrospect, Rabin’s puzzle reads like an easter egg from one of the greats to anyone willing to think a layer deeper: a reminder that computation, security, and strategy are inseparable. For developers and security architects, it’s an invitation to treat every new mechanism less like a lab exercise—and more like a room where Smith and Jones are already, quietly, sharpening their vials.