A new King's College London survey shows one in seven Britons turn to AI chatbots for medical advice, raising GDPR and CCPA compliance questions. Regulators warn that unchecked data handling could trigger hefty fines and push providers to tighten privacy safeguards.

What happened
A recent survey of more than 2,000 UK adults, commissioned by King's College London, found that 14% of respondents have used a large-language-model chatbot, such as ChatGPT, instead of contacting their GP. Another 10% said they preferred a chatbot to professional mental-health support. The study also revealed that 20% of users ignored medical advice from a clinician after receiving a chatbot response, and 21% skipped contacting a health service altogether.
Why it matters legally
The rapid uptake of AI-driven symptom checkers triggers several data-protection obligations under the EU General Data Protection Regulation (GDPR) and, for any provider handling UK residents' data, the UK GDPR, which mirrors its EU counterpart. Key provisions that come into play include:
- Lawful basis for processing – Providers must justify processing of health data (a special category under Article 9) with explicit consent or another narrow legal ground. Many chatbot services rely on vague "terms of service" consent, which may not meet the strict standard required for health information.
- Data minimisation and purpose limitation – Only the data strictly necessary for the chatbot’s function may be collected, and it must not be repurposed for advertising or model training without a separate consent layer.
- Transparency and user rights – Users must be informed, in clear language, about how their inputs are stored, who can access them, and how long they are retained. They also retain the right to request erasure, rectification, and a copy of the data.
- Security of processing – Article 32 requires "appropriate technical and organisational measures" to protect health data. A breach in a chatbot’s backend could expose millions of sensitive records.
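Article 32 deliberately leaves "appropriate technical and organisational measures" open-ended, but encryption at rest is the textbook example. The sketch below, assuming a Python backend and the open-source cryptography package, shows the basic idea; the function names and in-memory store are hypothetical illustrations, not any real provider's code.

```python
# Minimal sketch of an Article 32-style "technical measure": encrypt a user's
# health-related input before it touches storage. Uses the symmetric Fernet
# recipe from the open-source `cryptography` package; every name here is
# illustrative, not any real chatbot provider's API.
from cryptography.fernet import Fernet

# In production the key would live in a key-management service, not in code.
key = Fernet.generate_key()
cipher = Fernet(key)

def store_message(user_id: str, text: str, db: dict) -> None:
    """Persist only what the chatbot needs (data minimisation):
    a pseudonymous user id and the encrypted message body."""
    token = cipher.encrypt(text.encode("utf-8"))
    db.setdefault(user_id, []).append(token)

def read_messages(user_id: str, db: dict) -> list[str]:
    """Decrypt on demand; a breach of the stored tokens alone
    reveals nothing without the key."""
    return [cipher.decrypt(t).decode("utf-8") for t in db.get(user_id, [])]

db: dict = {}
store_message("pseudonym-7f3a", "I found a lump on my neck last week", db)
print(read_messages("pseudonym-7f3a", db))
```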
In the United States, the California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA), imposes similar duties on any service that collects personal information from California residents, including health-related data. While the CCPA does not create a separate health-data category, it still mandates notice, the right to opt out of data sales, and strict breach-notification timelines.
Impact on users and companies
- Patients – When a chatbot stores a description of a "mysterious lump" or a mental-health concern, that information is special-category health data under the GDPR. If the provider later uses that data to improve its model without explicit consent, the user could suffer a privacy violation and lose trust in the health system.
- Chatbot providers – Companies that host AI models for medical advice are potential "data controllers" under the GDPR. Failure to obtain valid consent or to implement strong encryption could trigger fines of up to €20 million or 4% of global annual turnover, whichever is higher. In the UK, the Information Commissioner's Office (ICO) can levy comparable penalties of up to £17.5 million or 4% of turnover.
- NHS and GP practices – Even if the NHS does not directly run the chatbot, clinicians may be held liable if they rely on advice generated by an unverified AI tool. The ICO has warned that "responsibility for AI mistakes often lands on clinicians even when they have limited control over the systems being deployed," echoing concerns raised by Professor Graham Lord.
What changes are coming
- Stricter guidance from the ICO – The ICO is drafting a supplemental code of practice for AI-enabled health services. The draft emphasises:
  - Mandatory data protection impact assessments (DPIAs) for any AI that processes health data.
  - Independent audit trails that record who accessed the data and when.
  - Clear labelling that a response was generated by AI, not a clinician.
- Contractual safeguards for third-party providers – NHS trusts are expected to embed GDPR-compliant clauses in contracts with AI vendors, demanding:
  - End-to-end encryption of user inputs.
  - Prompt breach notification (within 72 hours).
  - The right to audit the vendor's data-handling practices.
- Potential enforcement actions – The ICO has already opened two investigations into popular symptom‑checker apps for alleged non‑compliant data collection. If violations are confirmed, fines could exceed £10 million per breach, and the ICO may order the suspension of the offending service.
- Consumer‑level tools – Under the CPRA, California residents can request that a chatbot provider delete all health‑related data. Similar rights are now being incorporated into the UK’s upcoming Data Protection Bill, which would give individuals a "right to be forgotten" for AI‑generated health records.
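To make the last two points concrete: the hedged sketch below, assuming a Python backend, shows one way a provider might honour an erasure or CPRA deletion request while writing the kind of independent audit-trail entry the ICO draft describes. Every identifier is hypothetical, and a real implementation would also need to purge backups and any downstream copies.

```python
# Illustrative-only sketch of honouring a deletion request (GDPR Art. 17 /
# CPRA deletion right) while keeping an audit trail. All names hypothetical.
import datetime

def handle_deletion_request(user_id: str, records: dict, audit_log: list) -> bool:
    """Erase the user's stored health inputs and record who did what, when.

    The audit entry deliberately stores no health content, only the fact
    that a deletion occurred, so the log itself stays data-minimal."""
    existed = user_id in records
    records.pop(user_id, None)  # drop the user's stored messages
    audit_log.append({
        "event": "erasure_request",
        "subject": user_id,  # pseudonymous id, never a real name
        "fulfilled": existed,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return existed

# Usage: the request is logged whether or not data was held, so the
# trail can show a regulator that every request was processed.
records = {"pseudonym-7f3a": ["<encrypted tokens>"]}
audit_log: list = []
handle_deletion_request("pseudonym-7f3a", records, audit_log)
print(audit_log)
```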
What users can do now
- Read the privacy notice – Look for explicit statements about how health data is used, stored, and shared.
- Exercise your rights – If you have already used a chatbot, request a copy of the data and ask for its deletion if you are uncomfortable with further processing.
- Prefer vetted services – Choose platforms that have undergone a UK GDPR Data Protection Impact Assessment (DPIA) and display a certification such as the ISO/IEC 27701 privacy extension.
Bottom line
The convenience of AI chatbots is undeniable, but the rush to replace a GP with a language model has outpaced the legal safeguards meant to protect sensitive health information. Regulators are tightening the net, and providers that ignore GDPR, UK GDPR, or CCPA requirements risk not only hefty fines but also eroding public confidence in digital health. The next wave of AI-enabled care will be judged not just on accuracy, but on whether it respects the fundamental privacy rights of patients.
