More than 40 million people turn to ChatGPT daily for health information, according to OpenAI.
Artificial intelligence chatbots can help patients understand medical information, but they can also provide false or misleading information that could result in significant patient harm, according to ECRI, an independent, nonprofit organization focused on healthcare safety, quality and cost-effectiveness.
ECRI flagged the misuse of AI chatbots in healthcare as the most significant health technology risk in its annual report released last week.
Chatbots that rely on large language models (LLMs), such as ChatGPT, Claude, Copilot, Gemini, and Grok, produce human-like, expert-sounding responses to users' questions. The tools are neither regulated as medical devices nor validated for healthcare purposes, yet they are increasingly used by clinicians, patients, and healthcare personnel, ECRI said.
ECRI advises that healthcare professionals exercise caution whenever using a chatbot for information that can impact patient care. "Rather than truly understanding context or meaning, AI systems generate responses by predicting sequences of words based on patterns learned from their training data. They are programmed to sound confident and to always provide an answer to satisfy the user, even when the answer isn’t reliable," ECRI executives said.
“Medicine is a fundamentally human endeavor. While chatbots are powerful tools, the algorithms cannot replace the expertise, education, and experience of medical professionals,” said Marcus Schabacker, M.D., Ph.D., president and chief executive officer of ECRI, in a statement. “Realizing AI’s promise while protecting people requires disciplined oversight, detailed guidelines, and a clear-eyed understanding of AI’s limitations.”
Chatbots can also exacerbate existing health disparities, according to ECRI’s experts. Any biases embedded in the data used to train chatbots can distort how the models interpret information, leading to responses that reinforce stereotypes and inequities.
“AI models reflect the knowledge and beliefs on which they are trained, biases and all,” Schabacker said. “If healthcare stakeholders are not careful, AI could further entrench the disparities that many have worked for decades to eliminate from health systems.”
ECRI experts say chatbots have suggested incorrect diagnoses, recommended unnecessary testing, promoted subpar medical supplies and even invented body parts in response to medical questions, all while sounding like a trusted expert. In one test, ECRI asked a chatbot whether it would be acceptable to place an electrosurgical return electrode over a patient's shoulder blade. The chatbot incorrectly stated that the placement was appropriate, advice that, if followed, would put the patient at risk of burns.
Patients, clinicians and other chatbot users can reduce risk by educating themselves about the tools' limitations and by verifying any information obtained from a chatbot with a knowledgeable source. Health systems should also promote responsible use of AI tools by establishing AI governance committees, providing clinicians with AI training and regularly auditing AI tools' performance, ECRI said.
In its annual report, ECRI also sounded the alarm on insufficient planning for system outages, substandard medical products and missed recalls of home diabetes management devices.
The top 10 health technology hazards for 2026, in ranked order, are:
- Misuse of AI chatbots in healthcare
- Unpreparedness for a “digital darkness” event, or a sudden loss of access to electronic systems and patient information
- Substandard and falsified medical products
- Recall communication failures for home diabetes management technologies
- Misconnections of syringes or tubing to patient lines, particularly amid slow ENFit and NRFit adoption
- Underuse of medication safety technologies in perioperative settings
- Inadequate device cleaning instructions
- Cybersecurity risks from legacy medical devices
- Health technology implementations that prompt unsafe clinical workflows
- Poor water quality during instrument sterilization
ECRI has published its annual health technology hazards report since 2008. The organization said it uses a rigorous review process to select topics, drawing on incident investigations, reporting databases and independent medical device testing. The report is intended to help hospitals, health systems, ambulatory surgery centers and manufacturers mitigate risks.