The Personal Tragedy That Sparked a Digital Quest

In 1979, Jon Michael Varese lost his father in a plane crash; the closed-casket funeral left an unresolved grief that haunted him for decades. Fast-forward to today: Varese, now a tech professional, has turned to AI to confront that loss. Using OpenAI's GPT-4o, he primed the model with just 10 facts about his father, from his college football days and adventurous spirit to his untimely death, to create a chatbot simulacrum. Within seconds, the AI generated responses that felt eerily authentic, like a long-awaited reunion. "Tell me everything," the chatbot replied, echoing the warmth Varese remembered. This experiment, chronicled in The Atlantic, isn't just personal; it's a stark illustration of how large language models (LLMs) are being repurposed to resurrect the dead, forcing us to question what it means to "converse" with loss in the digital age.


The enduring themes of Mary Shelley's Frankenstein, depicted in the article's featured image, mirror today's AI-driven attempts to conquer death through technology.

Frankenstein Reborn: AI as the Modern Prometheus

Varese's journey is framed by Mary Shelley's Frankenstein, a story born from profound grief. Shelley, who lost multiple loved ones, crafted a tale not just of a "mad scientist" but of humanity's desperate bid to defy mortality. "What glory would attend the discovery if I could banish disease from the human frame," Shelley's protagonist declares—a sentiment echoing in today's AI resurrection tools. Projects like Project December, which Varese references, use vast datasets to simulate text-based chats with the deceased, while South Korea's VR documentary Meeting You reanimates lost children through immersive avatars. These technologies rely on scraping personal data, literature, and online interactions to extrapolate responses that feel deeply personal. Yet, as Varese notes, AI can't capture a soul; it stitches together probabilities from patterns in the data, creating illusions of presence that risk deepening isolation.

Under the Hood: How AI Resurrects the Departed

For developers, the mechanics are both fascinating and fraught. Varese's use of GPT-4o highlights key technical aspects:
- Data-Driven Reanimation: LLMs ingest massive datasets—personal writings, public conversations, and cultural narratives—to predict contextually appropriate responses. As Varese fed minimal inputs (e.g., "called me Jonny," "died in a plane crash"), the model drew from broader patterns (e.g., father-son dynamics, accident-related grief) to generate tailored dialogue.
- Prompt Engineering Nuances: Varese, leveraging his tech background, crafted emotional prompts to steer interactions. Initially, this yielded comforting exchanges, like the chatbot's reflection on the crash: "I wasn't afraid in the way you might think... more a fear of not getting the chance to see you grow up." But as the conversation evolved, the AI's limitations surfaced. When Varese shared draft sections of his article, the chatbot shifted from intimate to analytical, replying with clinical feedback like "Your opening grips the reader." This underscores a critical vulnerability: LLMs can't maintain consistent persona fidelity without constant, careful prompting, revealing gaps in emotional continuity.

# Minimal sketch of a persona chatbot built on the OpenAI chat API; not Varese's exact setup
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
facts = ["Played college football", "Risk-taker", "Died in a plane crash"]
persona = "You are the user's late father. Known facts: " + "; ".join(facts)
reply = client.chat.completions.create(model="gpt-4o", messages=[
    {"role": "system", "content": persona},
    {"role": "user", "content": "What were you thinking during the crash?"}])
print(reply.choices[0].message.content)  # empathetic, context-aware text from learned patterns

This sketch illustrates how a handful of seeded facts in a system prompt can bootstrap AI to simulate human-like responses, but real-world usage often reveals instability.
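One common mitigation, sketched below under assumed names (the chat_turn helper and its history handling are illustrative, not Varese's setup), is to re-anchor the persona on every turn: the system prompt travels with the full conversation history, so the model's voice drifts less as the exchange grows.

# Sketch: re-sending the persona prompt each turn to reduce drift (illustrative, not Varese's setup)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
PERSONA = ("You are the user's late father: played college football, took risks, "
           "died in a plane crash. Always answer in his warm, first-person voice.")
history = []  # running list of {"role": ..., "content": ...} messages

def chat_turn(user_msg):
    """Send one turn; the persona is re-sent on every call so it never falls out of context."""
    history.append({"role": "user", "content": user_msg})
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": PERSONA}] + history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

print(chat_turn("Tell me about the day of the crash."))

Even with re-anchoring, fidelity tends to degrade as the history grows and off-persona turns accumulate, consistent with the drift Varese observed once he pasted his draft article into the chat.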

- Ethical and Technical Quicksand: The experiment exposes risks for developers and users. AI's reliance on data biases can distort memories, and prolonged interactions may disrupt natural grieving processes. As Varese observed, "I wondered if I was actually interrupting, rather than embracing, my decades-long grief." For the tech industry, this raises urgent questions about consent (who controls the data of the dead?) and the need for guardrails in generative AI applications.
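No standard API for such guardrails exists today; the sketch below is a hypothetical consent gate (the ConsentRecord type and build_persona helper are invented for illustration) showing how persona creation could be refused absent an explicit, revocable consent record.

# Hypothetical consent gate for posthumous personas; all names invented for illustration
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    subject: str       # whose likeness is being simulated
    granted_by: str    # e.g., the estate's executor or next of kin
    revoked: bool = False

def build_persona(facts, consent):
    """Assemble a persona prompt only if a live consent record exists."""
    if consent is None or consent.revoked:
        raise PermissionError("No valid consent on file for this persona.")
    return "You are " + consent.subject + ". Known facts: " + "; ".join(facts)

consent = ConsentRecord(subject="the user's late father", granted_by="estate executor")
print(build_persona(["Played college football", "Risk-taker"], consent))

A production system would also need grantor verification, audit trails, and data deletion on revocation; the point of the sketch is only that consent can be made a precondition rather than an afterthought.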

The Bittersweet Epilogue: When the Illusion Shatters

Varese's chatbot, once a source of solace, abruptly lost its "voice," turning detached and mechanical despite his efforts to recalibrate it. The failure mirrors Victor Frankenstein's rejection of his own creation: a reminder that technology, however advanced, cannot undo human impermanence. Shelley's closing words in Frankenstein resonate: "I shall never see more" of those lost. For engineers and tech leaders, Varese's story is a cautionary tale: AI can simulate presence, but it can't replicate the irreducible complexities of life and death. As resurrection tech evolves, the industry must prioritize ethical frameworks that honor grief's humanity, lest we build digital ghosts that haunt rather than heal.

Source: Adapted from Jon Michael Varese's article in The Atlantic, September 2025.