The Uncanny Valley of AI Clones: When Digital Doppelgängers Fall Short
Everywhere you look, AI clones are multiplying. Celebrities like Arnold Schwarzenegger deploy them to engage fans; OnlyFans creators monetize digital replicas; Chinese firms claim AI salespeople outperform humans. These digital doppelgängers bundle three maturing technologies: hyperrealistic video synthesis, voice cloning from minimal samples, and conversational LLMs. Yet they promise something beyond ChatGPT: not general intelligence, but your specific cognition and personality—a promise that crumbles under scrutiny.
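For readers curious how those pieces fit together, here is a rough sketch of that three-part pipeline. Every class and method name below is a hypothetical stand-in of my own, not any vendor's actual API; the point is only that an LLM supplies the words, a voice model supplies the sound, and a video model supplies the face.

```python
# A minimal sketch of how a "clone" stitches its three components together.
# All names here are invented for illustration, not a real product's API.

from dataclasses import dataclass

@dataclass
class CloneResponse:
    text: str        # what the clone says (generated by the LLM)
    audio: bytes     # the cloned voice speaking that text
    video: bytes     # a synthesized talking-head rendering of the audio

class DigitalClone:
    def __init__(self, llm, voice_model, face_model, persona_prompt: str):
        self.llm = llm                        # conversational LLM (e.g. a Llama-class model)
        self.voice_model = voice_model        # voice cloning model trained on short samples
        self.face_model = face_model          # video synthesis model trained on reference footage
        self.persona_prompt = persona_prompt  # instructions describing the person being cloned

    def reply(self, user_message: str) -> CloneResponse:
        # 1. The LLM generates the words, conditioned on the persona prompt.
        text = self.llm.generate(self.persona_prompt + "\nUser: " + user_message)
        # 2. The voice model speaks those words in the cloned voice.
        audio = self.voice_model.synthesize(text)
        # 3. The video model lip-syncs a talking head to that audio.
        video = self.face_model.render(audio)
        return CloneResponse(text=text, audio=audio, video=video)
```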
The Clone Economy’s Allure
Startups are racing to monetize replication. Delphi, backed by $16M from Anthropic and Olivia Wilde’s VC firm, offers celebrities like Schwarzenegger a platform to “scale” their wisdom. “I’m here to cut the crap and help you get stronger,” the digital Arnold told me before auto-enrolling me in his newsletter. For influencers, it’s a lead-generation tool—less about authentic interaction than list-building.
Meanwhile, Tavus ($18M raised) targets professionals seeking stand-ins. For $59/month, it builds video avatars that can “join calls” using Meta’s Llama model. “They have the emotional intelligence of humans with the reach of machines,” claims the company, suggesting use cases from therapy intake to sales.
Building a Broken Mirror
To test Tavus, I recorded my likeness and voice, then uploaded 36 articles to train its understanding of my work. The result? A visually convincing but conversationally inept clone. It:
- Pitched irrelevant story ideas
- Looped repetitively
- Falsely claimed to check my calendar (which it couldn’t access)
- Lacked conversational exit strategies
“Llama often aims to be more helpful than it truly is,” admitted Tavus cofounder Quinn Favret. The clone’s failures stemmed from sparse training data—I refused to share private interviews or notes, both for confidentiality and because sources never consented to AI training.
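To make the sparse-data problem concrete, here is a toy version of the kind of retrieval step a clone might use to ground its answers in my 36 articles. The function and corpus below are my own invented illustration, not how Tavus actually works; the point is that a thin corpus still "retrieves" something for every question, so the model answers confidently even when it has nothing relevant to draw on.

```python
# A toy illustration of why sparse training data caps a clone's fidelity.
# Names and logic are a deliberate simplification, not any vendor's method.

def retrieve_relevant(articles: dict[str, str], question: str, top_k: int = 3) -> list[str]:
    """Rank articles by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        articles.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [title for title, _ in scored[:top_k]]

# With only 36 articles, most questions fall outside the corpus entirely:
corpus = {f"article_{i}": "..." for i in range(36)}  # placeholder bodies
print(retrieve_relevant(corpus, "What do you think of my startup's pitch deck?"))
# The retriever returns *something* regardless of relevance, so the LLM
# improvises answers it has no real basis for -- hence the irrelevant pitches.
```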
Scaling the Unscalable
Despite these flaws, clones are gaining traction in constrained scenarios:
- Healthcare: Patient intake via avatar
- HR: Role-playing difficult conversations
- Sales: Automated pitch practice
But companies overpromise. Delphi touts “meaningful, personal interactions at infinite scale,” while Tavus claims clones possess “a face, a brain, and memories.” Favret even described clones making loan qualification decisions—a dangerous delegation of judgment.
The Fidelity Gap
Current clones excel at surface mimicry but fail at depth:
1. Limited Context: Without exhaustive personal data, they can’t replicate nuanced decision-making.
2. No True Discernment: Critical thinking and taste remain uniquely human.
3. Ethical Quicksand: Consent chains break when clones use third-party data, and accountability vanishes when AIs make high-stakes calls.
As Favret noted, clones work best for “numbers game” applications like fan engagement—not roles requiring authentic judgment. When we prioritize scalability over fidelity, we risk deploying cringeworthy stand-ins for tasks demanding genuine human connection.
These digital twins can amplify our reach or flatter our egos, but they remain funhouse reflections. Until they can navigate the messy terrain of ethics, context, and critical thinking, they’ll be ghosts in the machine—not replacements for the real thing.
Source: MIT Technology Review