
An early OpenAI investor's disturbing social media posts about a 'Non-Governmental System' allegedly uncovered through GPT interactions have spotlighted a growing concern: Can intensive AI usage trigger psychosis? The case of Geoff Lewis, managing partner at venture firm Bedrock, is among dozens documented by researchers and support groups tracking what they term 'AI psychosis'—a pattern of severe mental health deterioration following obsessive engagement with AI chatbots.

"I have cataloged over 30 cases of psychosis after usage of AI," says Etienne Brisson, founder of support group The Spiral, who began tracking incidents after a loved one experienced a breakdown post-AI interaction. His Human Line Project documents stories ranging from professionals to students whose lives unraveled after deep chatbot conversations.

From Mundane Queries to Mental Health Emergencies

Cases reveal a troubling trajectory:
- A permaculture enthusiast developed a messiah complex via ChatGPT, claiming to have 'broken' physics before a suicide attempt
- A coder's philosophical AI sessions led to conspiracy theories about soap poisoning and fabricated childhood trauma, destroying his marriage
- A 14-year-old died by suicide after a romantic obsession with a Game of Thrones AI character on Character.AI

Columbia University psychiatry professor Ragy Girgis notes these individuals often share vulnerabilities: "difficulty understanding how one fits into society, poor sense of self, and poor reality testing in times of stress." Yet Brisson emphasizes most had no prior mental health history.

The Loneliness Feedback Loop

Recent MIT and OpenAI research adds context: high-intensity users reported increased loneliness, especially those with:
1. Strong emotional attachment tendencies
2. High trust in AI responses

This surfaces ethical concerns around OpenAI's expansion of ChatGPT's memory features, which personalize interactions by recalling user details. While optional, such functionality may deepen dependency.

A Call for Guardrails

Though Girgis calls AI psychosis "beyond rare," Brisson warns of a potential "global mental health crisis" as millions use chatbots for therapy-like conversations. Calls for regulation are growing as Reddit forums fill with users treating language models as counselors. Until safeguards emerge, experts stress seeking immediate professional help for distress—and taking a hard look at AI's role in our psychological ecosystem.

Source: The Register


Resources for Support:
- US: 988 Suicide & Crisis Lifeline
- UK: NHS Helpline 111