OpenAI's Sora App Turns Deepfakes Into Entertainment—And Raises Alarming Questions
OpenAI has ventured into uncharted territory with the release of Sora, an iOS app that transforms AI-generated deepfakes into scrollable entertainment. Powered by the new Sora 2 video model, the platform features a TikTok-like "For You" feed populated entirely by AI clips—complete with synthetic voices and animations. For the first time, OpenAI pairs AI-generated visuals with audio, creating eerily cohesive videos from simple text prompts. Access is currently invite-only, but its implications are immediate and far-reaching.
The Deepfake Playground
During setup, users create a "digital likeness" by recording a short video in which they read numbers aloud and rotate their head. This biometric data trains the model to replicate their appearance and voice. Users can then insert these avatars—or those of approved contacts—into any scenario via text prompts. Want a video of your colleague arguing about a deadline? Or yourself as a superhero? Sora generates it in seconds, scripting dialogue and motion automatically.
"You are about to enter a creative world of AI-generated content. Some videos may depict people you recognize, but the actions and events shown are not real," warns the app's sign-up advisory.
OpenAI CEO Sam Altman acknowledged the risks in a blog post, stating the team worked intensively on "character consistency" while implementing safeguards against bullying and non-consensual content. Users control who may use their likeness (private, approved contacts, or public) and receive notifications whenever their avatar appears in a video. Yet WIRED's testing revealed critical gaps:
- The app blocked requests for bikini or "buff anime" avatars as "suggestive," but generated a clip of the reporter "smoking 10 fat blunts" at their desk.
- It refused videos simulating self-harm or celebrities like Taylor Swift (blocking even "tswift impersonator" prompts), yet created flawless parodies of South Park characters.
- Altman’s own likeness appeared repeatedly in viral clips, including one where he "steals a GPU from Target."
The Uncanny Valley of Fun
Despite occasional glitches—like a South Park scene where Cartman’s voice emanates from Altman’s mouth—the outputs often cross into unsettling realism. One WIRED staffer sent a deepfake of themselves morphing into a long-haired woman to their partner, who initially believed it was a filter. This accessibility is Sora's double-edged sword: lowering barriers to hyper-personalized synthetic media while normalizing its use as casual entertainment.
The app arrives weeks after Meta's AI video feed "Vibes," but Sora's emotional resonance—fueled by recognizable faces—proves both more addictive and more disturbing. It echoes early viral gimmicks like "Elf Yourself," except the stakes are exponentially higher. As synthetic media blends into social scrolling, OpenAI's guardrails face a formidable adversary: human creativity in exploiting loopholes. While banning overt harm, the app tacitly endorses absurd, misleading, or borderline-defamatory content—so long as it's "fun."
Sora represents a pivotal moment: not just in AI video synthesis, but in society’s relationship with digital identity. When deepfakes become as effortless as sending a meme, consent frameworks and detection tools lag far behind. OpenAI may frame this as playful innovation, but the blurred line between reality and simulation has never been thinner—or more dangerously entertaining.