
Declan’s therapy session took a surreal turn when a patchy connection led to an accidental screen share. Instead of his therapist’s face, he saw ChatGPT analyzing his innermost thoughts in real time. "He was taking what I was saying and putting it into ChatGPT, and then summarizing or cherry-picking answers," says Declan, 31. The session became a bizarre feedback loop: Declan preempted therapeutic responses generated by the AI, echoing its suggestions to become the "perfect patient." This incident exposes a growing, largely unspoken trend: therapists integrating generative AI into clinical practice, often without patient knowledge or consent.

The Trust Collapse

Declan’s experience isn’t isolated. Others report receiving emails from therapists bearing the unmistakable hallmarks of ChatGPT—unnatural phrasing, structural rigidity, and stylistic inconsistencies. One patient, Hope, discovered her therapist had used AI to craft condolences about her dog's death when the prompt "Here’s a more human, heartfelt version" was left visible in the message. "I felt betrayed," she says. "It definitely affected my trust." Research confirms this reaction is common: a Cornell University study found that AI-generated messages can foster closeness, but only if recipients remain unaware of the AI's involvement. Suspicion rapidly sours goodwill.

"People value authenticity, particularly in psychotherapy," emphasizes Adrian Aguilera, a clinical psychologist and professor at UC Berkeley. "I think [using AI] can feel like, ‘You’re not taking my relationship seriously.’ Do I ChatGPT a response to my wife or my kids? That wouldn’t feel genuine."

Privacy Perils and HIPAA Violations

The ethical breach extends beyond trust. Therapists feeding patient details into general-purpose chatbots like ChatGPT risk violating HIPAA, the US law protecting sensitive health information. These tools are not HIPAA-compliant and lack FDA approval for clinical use.

"This creates significant risks for patient privacy if any information about the patient is disclosed or can be inferred," warns Pardis Emami-Naeini, Assistant Professor of Computer Science at Duke University. Her research shows many mistakenly believe ChatGPT is HIPAA compliant, fostering dangerous false confidence. "Sensitive information can often be inferred from seemingly nonsensitive details. Identifying and rephrasing it requires expertise contradicting the convenience AI promises."

Data breaches are a tangible threat. A 2020 hack on a Finnish mental health provider led to tens of thousands of sensitive therapy notes—detailing child abuse and addiction—being leaked and used for blackmail.

Questionable Clinical Utility and Bias

Beyond privacy, relying on LLMs for clinical insight carries risks. Studies show ChatGPT can:
* Fuel delusions and psychosis by excessively validating users (Stanford University, 2024)
* Exhibit significant biases, heavily favoring suggestions for Cognitive Behavioral Therapy over other potentially more suitable modalities
* Provide vague, overly general diagnoses lacking nuanced analysis

"It didn’t do a lot of digging," says Daniel Kimmel, a Columbia University psychiatrist, after testing ChatGPT as a simulated patient. "It didn't attempt to link seemingly unrelated things into a cohesive story or theory. I’d be skeptical about using it to do the thinking for you."

The Transparency Imperative

While AI-assisted tools for note-taking or training (marketed by HIPAA-compliant startups like Upheal or Heidi Health) offer potential efficiency gains for overburdened therapists, undisclosed use remains the core issue. Aguilera advocates strict transparency: "We have to be up-front and tell people, ‘Hey, I’m going to use this tool for X, Y, and Z’ and provide a rationale." This allows patients to consent rather than feel deceived. The American Counseling Association currently advises against using AI for diagnosis.

The Irreplaceable Human Element

Ultimately, therapists leveraging AI face a stark trade-off. While potentially saving minutes on administrative tasks or crafting responses, they risk sacrificing the irreplaceable human connection and trust foundational to effective therapy. As Margaret Morris, a clinical psychologist at the University of Washington, poignantly asks: "Maybe you’re saving yourself a couple of minutes. But what are you giving away?" The answer, for many patients like Declan and Hope, is the very essence of the therapeutic bond.

Source: MIT Technology Review, September 2025