Now Recruiting Collaborators

The Health Chatbot Users' Guide

Ask smart, stay safe.

Millions of people worldwide use online chatbots for health advice, yet there is a governance blind spot. We are co-designing definitive, evidence-based guidance for the public on how to safely use Large Language Model chatbots for their personal health.

The Governance Blind Spot

General-purpose chatbots powered by Large Language Models (LLMs) like ChatGPT, Claude, Copilot and Gemini have become de facto health advisors for millions of people around the world. While useful, these tools are not governed as robustly as the rest of healthcare.

We adopt a position of pragmatic neutrality: we recommend neither for nor against the use of these tools. Instead, acknowledging that millions of people around the world find them helpful, we seek to mitigate harms while maximising their potential utility.

The Risks

Hallucinations, inaccuracies, dangerous medical advice, data privacy breaches, and reinforcement of harmful stereotypes.

The Opportunities

Improved health literacy, increased access to expert-level health advice, preparation for clinical consultations, and patient empowerment.

Our Objectives

1. Establish consensus on high-risk uses of these tools ("Red Flags" to avoid) and on the ways to get the most from them ("Guidance Notes" to try out).
2. Assemble these into a coherent, ergonomic resource for the public.
3. Develop a dissemination strategy that reaches patients and the wider public where they are.

A Mixed-Methods Approach

Our research programme balances public co-design with expert rigour.

1. Evidence Synthesis

Rapid review of existing guidance and an expert scoping survey to horizon-scan for technical risks - the "Red Flags".

2. Public Deliberation

Workshops with the public to explore current usage patterns and generate real-world "Guidance Notes".

3. Delphi Study

Sequential surveys with patients and the public, clinicians, and other experts to reach consensus on content.

4. Consensus Meeting

Final ratification of the wording, format, and tone of the Health Chatbot Users' Guide by public contributors.

The Health Chatbot Users' Guide

Proposed components

Red Lines

A short list of definitive "Do Nots", predominantly expert-derived and based on clinical risk and privacy law.

For example:

  • Inputting identifiable data, such as NHS or other identification numbers
  • Using LLMs for emergency triage
  • Calculating medication dosages

Guidance Notes

User-derived, actionable tips and generalised examples of where these tools are useful.

For example:

  • Prompting to match users' expertise level ("Explain simply")
  • Checking sources and references
  • Refining questions to ask at upcoming appointments

Project Timeline

We are currently in the preparatory phase, aiming for a full public launch of the guidance product by mid-2026.

Current Status

Finalising the project plan, co-designing the research strategy with patients and the public, and appointing co-investigators.

Dec 2025

Preparation & Setup

Ethical approval granted by the University of Birmingham (ERN_5495-Dec2025).
Project presented to patient/public workshop.

Early 2026

Discovery Phase

Rapid review, scoping survey, and public deliberative exercise.

Mid 2026

Consensus Building & Launch

Delphi rounds, consensus meeting, and public launch.

Join the Collaboration

We are looking for diverse voices from around the world to shape this guidance. Whether you are a patient, a member of the public, a clinician, or an expert in another field, your input matters.