Millions of people worldwide use online AI chatbots for health advice. We are co-creating evidence-based guidance to help patients and the public use these tools safely.
Khair DO, Kale AU, Agbakoba R, ... Alderman JE. Nat. Health (2026).
https://doi.org/10.1038/s44360-026-00074-5
AI chatbots — tools like ChatGPT, Google Gemini, Microsoft Copilot, and Claude, built on 'large language models' (LLMs) — have become go-to health advisors for millions of people worldwide. But unlike other parts of the healthcare system, there are few safety checks to ensure they are working properly.
We are taking a neutral stance: we won't tell people to use or avoid these tools. We know millions of people already find them helpful, so our goal is to reduce the risks and help people share how they get the most out of them safely.
Possible risks: made-up information ('hallucinations'), dangerous medical advice, data privacy leaks, and the reinforcement of harmful stereotypes.
Possible benefits: a better understanding of health issues, easier access to expert-level information, help preparing for doctor's appointments, and a greater sense of control over one's own health.
Our research brings together everyday people and technical experts to build this guide collaboratively.
Looking at current research and asking experts to identify the biggest technical risks (the "Red Flags").
Talking with everyday users to understand how they use chatbots and gathering their practical tips (the "Guidance Notes").
A Delphi study, where we send surveys to patients, doctors, and tech experts to reach a shared agreement on the final advice.
A final meeting where public contributors approve the guide's wording, design, and tone.
What the guide will include
The 'Red Flags': a short list of things to avoid, suggested mostly by experts and focused on medical safety and data privacy.
The 'Guidance Notes': practical tips from everyday users on how to get helpful and safe results.
We are aiming to launch the final guide to the public by mid-2026.
Launching our first scoping survey and building our network of public and professional contributors.
Ethical approval granted by the University of Birmingham.
Project shared with our first patient workshop.
Formed our steering group with members of the public and professionals from health and technology.
Reviewing evidence, surveying the public and experts, and running public workshops.
Running Delphi study surveys, holding the final review meeting, and launching the guide.
We are looking for diverse voices from around the world to shape this guidance. Whether you are a patient or member of the public, a clinician, or an expert in another area, your input matters.