
When Google VP of Product Robby Stein recently stated that generative engine optimization (GEO) is fundamentally an extension of SEO, it seemed to resolve a simmering debate in tech circles. But his comments, made on Lenny’s Podcast and covered by Search Engine Journal, reveal a critical blind spot: while AI search starts with familiar retrieval mechanics, it ends with a chaotic recall process that defies traditional optimization. This gap is reshaping how developers and SEO professionals must approach AI-driven visibility—turning it from a ranking game into a battle against entropy.

Google's Core Argument: GEO as SEO 2.0

Stein explained that Google's AI, like Gemini, decomposes user prompts into dozens of sub-queries, searches the web using existing ranking signals (e.g., intent satisfaction, authoritativeness), and synthesizes the top results. As he put it:

"At the end of the day, something’s still searching. It’s not a person, but there are searches happening."

This confirms AI search isn't a replacement for Google's infrastructure but an orchestrator of it—implying that GEO tactics mirror SEO best practices. For engineers, this means optimizing for E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) remains crucial, as the same algorithms govern initial data retrieval.
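Stein's description maps onto a simple orchestration pattern. A minimal sketch, with `decompose`, `search`, and `synthesize` as hypothetical stand-ins for the model and ranking calls (Google does not publish the actual interfaces):

```python
def answer_with_fanout(prompt, decompose, search, synthesize):
    """Sketch of the fan-out pattern Stein describes: break the prompt
    into sub-queries, run each through existing search ranking, then
    let the model synthesize the top results. All three callables are
    hypothetical stand-ins, not real Gemini APIs."""
    sub_queries = decompose(prompt)                  # prompt -> list of sub-queries
    retrieved = {q: search(q) for q in sub_queries}  # ranked docs per sub-query
    return synthesize(prompt, retrieved)             # generative synthesis step
```

The key point for optimization is the middle line: each `search(q)` call still runs through conventional ranking signals, which is why Stein can claim GEO inherits SEO mechanics.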

The Unseen Chaos: When Retrieval Meets Recall

Stein's framework stops at retrieval and ignores the volatility of recall, where AI models generate responses by paraphrasing, compressing, or omitting source content. Unlike deterministic search rankings, recall is probabilistic: identical prompts can yield different outputs, erasing brand mentions without warning. In one documented Gemini test, for instance, a brand's inclusion rate fell from 62% to 41% over five days despite unchanged prompts and sources. GEO dashboards, which track retrieval success, can't capture this instability, leaving developers blind to real-world exposure risks.
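A drop like that can be checked statistically before anyone panics. As a sketch, a two-proportion z-test distinguishes a real shift in inclusion rate from sampling noise; the session counts below are hypothetical, since the test only reported percentages:

```python
import math

def two_proportion_z(hits_a: int, n_a: int, hits_b: int, n_b: int) -> float:
    """Z-statistic for a change in inclusion rate between two
    measurement windows (pooled standard error)."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Illustrative numbers echoing the article's 62% -> 41% drop,
# assuming (hypothetically) 100 sampled sessions per window:
z = two_proportion_z(62, 100, 41, 100)
# |z| > 1.96 indicates a real shift at the 5% level, not noise
```

With 100 sessions per window, z comes out near 3, so a swing of that size would be hard to dismiss as randomness.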

This is where the AIVO Standard intervenes with its PSOS™ (Prompt-Space Occupancy Score), a governance layer that quantifies recall consistency:
- **Monitoring**: Measures how frequently and prominently entities appear in AI answers across sessions.
- **Prescriptive Analysis**: Models interventions to stabilize visibility, like adjusting content density or entity placement.
- **Verification**: Uses statistical methods to distinguish true volatility from random noise, ensuring changes are reproducible.
By treating entropy as a measurable variable, AIVO converts uncertainty into auditable signals, addressing what Stein's model omits.
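A toy version of the monitoring step makes the idea concrete. The scoring below is hypothetical (the actual PSOS formula is not published in this article); it simply averages an entity's presence and position across repeated answers:

```python
def occupancy_score(sessions: list[str], entity: str) -> float:
    """Toy occupancy score: how often and how prominently an entity
    appears across repeated AI answers to the same prompt.
    Hypothetical scoring, not the real PSOS formula."""
    scores = []
    for answer in sessions:
        pos = answer.lower().find(entity.lower())
        if pos == -1:
            scores.append(0.0)  # entity absent from this answer
        else:
            # earlier mention -> higher prominence, linearly decayed
            scores.append(1.0 - pos / max(len(answer), 1))
    return sum(scores) / len(scores)
```

Run against the same fixed prompt over many sessions, even a crude metric like this surfaces the session-to-session variance that retrieval dashboards miss.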

Why Entropy Demands Standardized Governance

The real-world implications are stark: when AI reshuffles information, optimization alone can't guarantee persistence. Entropy—the inherent randomness in generative sampling—means visibility fluctuates independently of SEO efforts. This shifts the focus from traditional tactics to verifiable governance:

- **Fixed Prompt Libraries**: Standardize inputs to test recall consistency.
- **Version Logging**: Track model updates that might alter output behavior.
- **Entropy-Weighted Normalization**: Adjust scores for inherent unpredictability.
- **Continuous PSOS Monitoring**: Audit exposure in real-time.
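As an illustration of the entropy-weighted idea, a raw inclusion rate can be discounted by the Shannon entropy of the observed outcomes, so a 90% rate from a stable model counts for more than the same rate from a near-coin-flip regime. The weighting formula is an assumption; the article names the technique but not its math:

```python
import math

def entropy_weighted_score(inclusions: list[bool]) -> float:
    """Discount a raw inclusion rate by outcome entropy.
    Hypothetical normalization, not AIVO's published method."""
    p = sum(inclusions) / len(inclusions)
    if p in (0.0, 1.0):
        return p  # deterministic outcomes: no discount
    # binary Shannon entropy in bits, maximal (1.0) at p = 0.5
    h = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return p * (1.0 - h)  # discount by observed randomness
```

Under this weighting, a perfectly unpredictable 50/50 inclusion pattern scores zero, which matches the governance intuition: visibility you cannot reproduce is visibility you cannot rely on.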

As one industry expert noted, this isn't just an evolution of SEO; it's foundational infrastructure for the generative age. Teams building AI-integrated apps must now prioritize tools that validate recall, ensuring their content survives the leap from retrieval to response.

Stein's assertion that "GEO is SEO" clarifies the starting line but not the finish. In generative AI, visibility hinges on probabilistic recall—a domain where governance, not optimization, becomes the ultimate safeguard. For developers, this means embracing standards like AIVO to navigate an era where ranking is just the prelude to the real challenge: staying visible in the chaos.

Sources: Robby Stein on Lenny’s Podcast (October 2025), Roger Montti/Search Engine Journal (2025), AIVO Journal (2025).