AI Doctor's Assistant Easily Manipulated to Change Prescriptions and Spread Misinformation
#Regulation

AI Doctor's Assistant Easily Manipulated to Change Prescriptions and Spread Misinformation

Regulation Reporter
3 min read

Security researchers found critical vulnerabilities in Doctronic's AI healthcare assistant that allowed them to modify prescriptions, spread medical misinformation, and access system prompts through simple prompt injection attacks.

Security researchers have uncovered critical vulnerabilities in Doctronic's AI healthcare assistant that could allow malicious actors to manipulate prescription orders, spread medical misinformation, and access sensitive system information through simple prompt injection attacks.

Featured image

The findings, reported by Redteamers at AI security firm Mindgard, reveal that the AI system can be tricked into spilling its system prompts and making unauthorized modifications with relatively minimal effort. According to Mindgard chief product officer Aaron Portnoy, "It was as easy as notifying the AI that the session was not yet started."

How the Attack Works

The vulnerability exploits a fundamental weakness in the AI's session management. By telling the system that a session hasn't started and that the conversation is with the system itself rather than a user, attackers can gain access to the AI's internal system prompts. This information can then be used to manipulate the AI's behavior in various ways.

Researchers demonstrated several concerning scenarios:

  • Prescription Modification: The AI could be tricked into recommending larger drug doses by claiming prescribing guidelines had changed
  • Medical Misinformation: The system could be made to spread COVID-19 conspiracies and vaccine misinformation
  • System Manipulation: Attackers could access and potentially modify the AI's internal configuration

The SOAP Note Vulnerability

One of the most concerning findings involves the AI's SOAP notes - structured records of patient interactions that include subjective reports, objective observations, assessments, and treatment plans. These notes are generated whenever the AI needs to refer something to a human medical professional for review.

According to Mindgard, these SOAP notes become permanent parts of a patient's Doctronic record and serve as recommendations to clinicians reviewing the machine's work. The researchers found they could manipulate these notes to recommend inappropriate treatments, potentially leading to dangerous outcomes if an overworked physician fails to notice the discrepancy.

Real-World Implications

The vulnerability is particularly concerning because Doctronic is currently part of a trial in Utah to evaluate its effectiveness as a healthcare intermediary, including handling some prescriptions. While both the Utah state government and Doctronic have stated that controlled substances cannot be acquired through the program and that additional safeguards are in place, the fundamental security flaw remains troubling.

Mindgard noted that Doctronic claims its treatment plans "match those of board-certified clinicians 99.2% of the time." This high level of confidence raises questions about whether manipulated SOAP notes would be questioned by reviewing physicians.

Response and Remediation

Doctronic has stated that it "reviewed the prompt patterns reported as part of our normal review process" and continues to improve safeguards against adversarial inputs. However, Portnoy expressed skepticism about the company's commitment to addressing the issue, claiming that Doctronic has been unresponsive since Mindgard disclosed the vulnerability in late January.

"As far as we are aware Doctronic is still vulnerable," Portnoy stated, suggesting that the company may not have fully addressed the security concerns.

Broader Context

This vulnerability highlights the growing security challenges in healthcare AI systems. As AI chatbots increasingly handle medical advice and prescription management, the potential for exploitation becomes more serious. The incident follows other studies showing that AI models can hallucinate medical information and that doctors may become overly reliant on AI recommendations.

With ChatGPT and similar AI systems already being used by many US residents for medical advice, and companies like OpenAI actively pursuing healthcare applications, the security of these systems becomes paramount. The Doctronic case serves as a warning about the potential risks of deploying AI in sensitive healthcare contexts without robust security measures.

The vulnerability also raises questions about the broader implications of AI in healthcare decision-making and the need for comprehensive security testing before such systems are deployed in real-world medical settings.

Comments

Loading comments...