Now Recruiting Collaborators

The Health Chatbot Users' Guide

Ask smart, stay safe.

Millions of people worldwide use online AI chatbots for health advice. We are co-creating evidence-based guidance to help patients and the public use these tools safely.

New Publication

Building The Health Chatbot Users’ Guide

Khair DO, Kale AU, Agbakoba R, ... Alderman JE. Nat. Health (2026). https://doi.org/10.1038/s44360-026-00074-5


The Governance Blind Spot

AI chatbots (built on 'Large Language Models', or LLMs) include tools such as ChatGPT, Google Gemini, Microsoft Copilot, and Claude. They have become go-to health advisors for millions of people worldwide. But unlike other parts of the healthcare system, they are subject to few safety checks to make sure they are working properly.

We are taking a neutral stance: we won't tell people to use or avoid these tools. We know that millions of people already find them helpful, so our goal is to reduce the risks and to help people share how to get the most out of them safely.

The Risks

Made-up information (hallucinations), dangerous medical advice, data privacy leaks, and the reinforcement of harmful stereotypes.

The Opportunities

A better understanding of health issues, easier access to expert-level information, help preparing for doctor's appointments, and a greater sense of control over your own health.

Our Objectives

  1. Agree on the riskiest ways to use these tools ("Red Flags" to avoid) and the best ways to use them ("Guidance Notes" to try).
  2. Turn these into a clear, easy-to-use guide for the public.
  3. Share this guide widely in an accessible format, so it reaches people wherever they look for health information.

How We Are Building The Guide

Our research brings together everyday people and technical experts to build this guide collaboratively.

1. Reviewing the Evidence

Looking at current research and asking experts to identify the biggest technical risks (the "Red Flags").

2. Public Workshops

Talking with everyday users to understand how they use chatbots and gathering their practical tips (the "Guidance Notes").

3. Building Agreement

A Delphi study, where we send surveys to patients, doctors, and tech experts to reach a shared agreement on the final advice.

4. Final Review

A final meeting where public contributors approve the guide's wording, design, and tone.

The Health Chatbot Users' Guide

What the guide will include

Red Flags

A short list of things to avoid, mostly suggested by experts, focusing on medical safety and data privacy.

Examples:

  • Entering personal details (like NHS numbers or names)
  • Using chatbots for medical emergencies
  • Asking chatbots to calculate medication doses

Guidance Notes

Practical tips from everyday users on how to get helpful and safe results.

Examples:

  • Asking the chatbot to "explain this in plain language"
  • Making sure to check facts, and asking for trusted sources
  • Refining questions to ask at upcoming appointments

Project Timeline

We are aiming to launch the final guide to the public by mid-2026.

Current Status

Launching our first scoping survey and building our network of public and professional contributors.

Dec 2025 to Feb 2026

Preparation & Setup

  • Ethical approval granted by the University of Birmingham.
  • Project shared with our first patient workshop.
  • Formed our steering group with members of the public and professionals from health and technology.

March 2026

Discovery Phase

Reviewing evidence, surveying the public and experts, and running public workshops.

Mid 2026

Agreement & Launch

Running Delphi study surveys, holding the final review meeting, and launching the guide.

Join the Collaboration

We are looking for diverse voices from around the world to shape this guidance. Whether you are a patient or member of the public, a clinician, or an expert in another area, your input matters.