Beyond URLs: How the Rise of Machine Consumers is Rewriting SEO
For decades, the URL reigned supreme as the atomic unit of the web—a cornerstone of SEO, analytics, and user navigation. But as Jono Alderson argues, that era is over. Today, over half of all web traffic comes from non-human sources: search engine crawlers, bots, and AI agents that treat your meticulously designed pages not as experiences, but as envelopes to rip open and mine for data. This seismic shift demands a fundamental rethink of how we build and optimize digital content.
The URL is Dead; Long Live the Assertion
Google’s evolution tells the story. Where early search indexed and ranked entire URLs, modern systems like passage ranking and the Knowledge Graph slice pages into discrete assertions—facts represented as subject-predicate-object triples (e.g., "Product X → has price → $99"). LLMs and AI agents take this further, stripping out styling and scripts to encode content into vector spaces for probabilistic retrieval. As Alderson notes:
"To these systems, a page at a URL is just a container—a source of assertions to extract, evaluate, and connect. Our page-first mental model no longer matches the environment we’re optimizing for."
This isn’t just about Google. When ChatGPT or Perplexity surface answers, they often rely on search engine results as trust proxies, inheriting biases and vulnerabilities. Without the decades of anti-spam infrastructure Google built, these nascent systems are easily polluted by manipulated data—making clarity and consistency non-negotiable.
Trust is Graph-Shaped, Not Page-Shaped
Machines don’t evaluate claims in isolation. They weigh them within a web of connections:
- Explicit graphs (like Google’s Knowledge Graph) use links and schema markup to corroborate facts.
- Implicit models in LLMs rely on statistical patterns across training data, where consistent assertions in high-quality contexts gain authority.
When weighing a claim, these systems effectively ask:
- Is the claim reinforced by trusted nodes (e.g., Wikipedia, reputable publishers)?
- Is it consistent across marketplaces, reviews, and third-party sites?
- Does it connect cleanly to other concepts in the model?
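As a toy illustration of that checklist, consider a hand-rolled scorer. The domains and weights below are invented for the example; real systems use far richer signals.

```python
# Invented trust weights for illustration; not any engine's actual model.
TRUSTED_NODES = {"wikipedia.org": 1.0, "reputable-publisher.com": 0.8}

def trust_score(claim: str, sightings: dict[str, str]) -> float:
    """Score a claim by who repeats it and how consistently.

    sightings maps a domain to the version of the claim observed there.
    """
    score = 0.0
    for domain, observed in sightings.items():
        weight = TRUSTED_NODES.get(domain, 0.1)  # unknown sources count for little
        if observed == claim:
            score += weight        # corroboration from a trusted node
        else:
            score -= weight * 0.5  # contradictions actively erode trust
    return score

# Consistent assertions accumulate authority; a polluted corpus drains it.
print(trust_score("Product X costs $99", {
    "wikipedia.org": "Product X costs $99",
    "reputable-publisher.com": "Product X costs $99",
    "random-blog.net": "Product X costs $150",  # a contradictory claim
}))
```

Note how a single contradictory sighting drags the score down, which is exactly the vulnerability the next paragraph describes.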
This creates a hostile landscape. Competitors and bad actors can exploit this "hostile corpus," polluting SERPs with contradictory claims to erode your authority. Your defense? Ensure every assertion about your brand, from pricing to expertise, is unambiguous and mirrored across trusted platforms. Authority now emerges from network coherence, not keyword density.
Engineering for the Machine-Mediated Web
Optimizing for this new reality doesn’t mean abandoning human-centric design. It means dual-layer creation: compelling narratives for people, and machine-optimized structures for agents. Key tactics include:
1. Design pages as assertion bundles: Use semantic HTML for hierarchy, and state key facts (e.g., "Price: $99") in predictable, repeated patterns; see the JSON-LD sketch after this list.
2. Publish machine-friendly endpoints: Offer APIs and structured data feeds (JSON/XML) so agents can bypass inefficient scraping; a minimal endpoint is sketched below.
3. Reinforce claims ecosystem-wide: Synchronize product details on marketplaces, YouTube descriptions, and partner sites to build graph resilience.
4. Monitor the battlefield: Track competitor distortions and counter misinformation with high-authority sources.
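For tactic 1, the most direct way to publish an assertion bundle is schema.org JSON-LD embedded alongside the human-readable page. A hypothetical product example, generated here in Python for consistency:

```python
import json

# Hypothetical values; swap in real product data before publishing.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Product X",
    "brand": {"@type": "Brand", "name": "Acme Corp"},
    "offers": {
        "@type": "Offer",
        "price": "99.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Embed the output in the page as <script type="application/ld+json">...</script>
print(json.dumps(product_jsonld, indent=2))
```

The same facts should appear verbatim in the visible copy, so the human layer and the machine layer corroborate each other.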
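And for tactic 2, a machine-friendly endpoint can be as simple as serving those same facts as JSON. This standard-library sketch assumes an invented /api/products/ route and catalog:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical catalog; in practice this would come from your product database.
CATALOG = {"product-x": {"name": "Product X", "price": "99.00", "currency": "USD"}}

class FeedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Map /api/products/<slug> to a catalog entry.
        slug = self.path.strip("/").removeprefix("api/products/")
        item = CATALOG.get(slug)
        body = json.dumps(item if item else {"error": "not found"}).encode()
        self.send_response(200 if item else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Agents get clean structured data instead of markup to scrape.
    HTTPServer(("localhost", 8000), FeedHandler).serve_forever()
```

An agent can then fetch http://localhost:8000/api/products/product-x and receive the assertion directly, with no scraping involved.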
Google’s old mantra to "write for humans" is incomplete. As machines mediate discovery, your content must be legible to both audiences. The winners will be those who stop chasing page rankings and start architecting a web of meaning—where every assertion is a node in an unignorable, trustworthy graph.
The URL was the web’s foundation, but we’re now building on its ruins. In this assertion-first future, your relevance hinges on how machines encode, connect, and ultimately believe your story.
Source: Jono Alderson, The web isn’t URL-shaped anymore