What if the fabric of reality operates like a vast neural network? This provocative question lies at the heart of The Autodidactic Universe, a 2021 paper by physicist Lee Smolin, technologist Jaron Lanier, and collaborators. Their radical proposition: Einstein's equations of general relativity, when expressed in Plebanski form, exhibit a mathematical correspondence with a Restricted Boltzmann Machine (RBM), a foundational architecture in machine learning.

The Core Correspondence

The researchers identified a striking parallel between physics and machine learning frameworks:

Physics (Plebanski Gravity)     Machine Learning (RBM)
---------------------------     ----------------------
Quantum matter fields           Network layers
Quantum gauge/gravity fields    Network weights
Evolution of physical laws      Weight updates during learning

This bridge relies on matrix mathematics. In physics, N×N matrices describe quantum fields, approaching continuous spacetime as N→∞. In neural networks, weight matrices govern the flow of information between layers. The implication? Spacetime geometry might fundamentally be a learned structure.
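
To make the machine-learning side of the correspondence concrete, here is a minimal RBM sketch in Python. It illustrates a standard RBM under the rough analogy above, not the paper's Plebanski construction: a visible layer and a hidden layer exchange information only through a weight matrix W, the structure the correspondence likens to gauge/gravity fields.

    import numpy as np

    # Minimal RBM sketch (illustrative; not the paper's Plebanski construction).
    # Visible units stand in for "matter" degrees of freedom, hidden units for
    # latent structure, and the N x N weight matrix W plays the role the
    # correspondence assigns to gauge/gravity fields: all information flowing
    # between the two layers passes through W.

    rng = np.random.default_rng(0)
    N = 8                                      # finite matrix size; the continuum limit is N -> infinity
    W = rng.normal(scale=0.1, size=(N, N))     # weights (the "slow" structure)
    a = np.zeros(N)                            # visible biases
    b = np.zeros(N)                            # hidden biases

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def energy(v, h):
        # Standard RBM energy: E(v, h) = -a.v - b.h - v.W.h
        return -(a @ v) - (b @ h) - (v @ W @ h)

    def gibbs_step(v):
        # One block-Gibbs sweep: visible -> hidden -> visible, mediated by W.
        h = (rng.random(N) < sigmoid(b + v @ W)).astype(float)
        v_new = (rng.random(N) < sigmoid(a + W @ h)).astype(float)
        return v_new, h

    v = rng.integers(0, 2, size=N).astype(float)
    v, h = gibbs_step(v)
    print("energy after one sweep:", energy(v, h))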

The Consequencer Principle

The theory introduces a "consequencer"—a structure accumulating information across time, analogous to neural network weights. Consider this analogy:

"Think of a river carving a canyon. Water molecules (fast variables) flow according to the canyon's shape (slow variables). Yet over millennia, the water reshapes the canyon itself. The canyon is the consequencer—changing imperceptibly while governing immediate behavior."

In cosmic terms:

            creates
    MATTER ─────────► GEOMETRY
       ▲                  │
       │                  │
       │    tells how     │
       └───── to move ────┘

Matter warps spacetime geometry, which then dictates matter's movement—a self-reinforcing loop encoding cosmic history into spacetime's fabric.
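
A toy simulation makes the consequencer idea tangible. The sketch below is a hedged illustration of the analogy, not a physical model: a fast "matter" variable relaxes within a landscape set by a slow "geometry" parameter, and that parameter in turn drifts toward where the matter has been, accumulating history the way network weights accumulate gradient updates.

    import numpy as np

    # Toy fast/slow loop (an analogy sketch only, not a physical model).
    # x is the fast "matter" variable: it relaxes quickly in a landscape whose
    # shape is set by the slow parameter g (the "geometry"/consequencer).
    # g, in turn, drifts slowly toward where the matter has been, so it ends up
    # encoding history, much as weights accumulate many small updates.

    rng = np.random.default_rng(1)
    g = 0.0                          # slow variable: location of the potential well
    x = 3.0                          # fast variable: current state of the "matter"
    eta_fast, eta_slow = 0.5, 0.01   # widely separated timescales

    for step in range(2000):
        # Fast dynamics: gradient descent on the potential (x - g)^2, plus noise.
        x -= eta_fast * 2.0 * (x - g) + 0.05 * rng.normal()
        # Slow dynamics: the landscape itself is nudged by where the matter sits.
        g += eta_slow * (x - g)

    print(f"after 2000 steps: x = {x:.3f}, g = {g:.3f}")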

Implications for Physics and AI

This framework suggests our universe might be autodidactic—self-teaching through evolutionary feedback loops. It offers potential answers to physics' persistent "why these laws?" question, contrasting with anthropic principles by suggesting physical laws emerged through optimization against cosmic-scale cost functions.

Critically, the model requires time's fundamental reality—aligning with Smolin's arguments in Time Reborn against Einstein's static spacetime view. Without temporal progression, cosmic learning becomes impossible.

For AI development, the parallels provoke profound questions:

  1. If learning is inherent to physical reality, are AI systems truly "artificial" or extensions of natural principles?
  2. Could breakthroughs in quantum gravity inform neural network scaling limitations?
  3. Might cosmological structures inspire novel AI architectures?

As noted in the source blog: "The tools we're spending trillions to build may be humanity rediscovering systems Nature honed over eons." While the Plebanski-RBM correspondence isn't full equivalence (it breaks at N→∞ limits), its mere existence hints at deep structural rhymes between how reality computes itself and how we compute intelligence.

Sources: The Autodidactic Universe paper, Smolin's Time Reborn, analysis via Ben's research