ECRI tested large language models with questions about medical products and technologies and received dangerously inaccurate information. (Illustration by Kudryavtsev via Stock.Adobe.com)
Artificial intelligence (AI) chatbots powered by large language models (LLMs) confidently state facts that are often wrong, a dangerous situation for people seeking medical information.
Technology developers are increasingly integrating these chatbots into consumer software and devices without adequate protections for patients, doctors, or anyone else who might use them to diagnose or treat disease, even though the chatbots are neither regulated as medical devices nor validated for medical use.
For these reasons, the Emergency Care Research Institute (ECRI) has identified the misuse of AI chatbots in healthcare as the top health technology risk for 2026.
“Medicine is a fundamentally human endeavor. While chatbots are powerful tools, algorithms cannot replace the expertise, training and experience of healthcare professionals,” ECRI President and CEO Dr. Marcus Schabacker said in a press release. “Realizing the promise of AI while protecting people requires disciplined oversight, detailed guidance, and a clear understanding of AI’s limitations.”
Related: GE HealthCare’s AI leader offers tips and advice for working with artificial intelligence
ECRI urged caution when using chatbots to obtain information that could impact patient care.
“Rather than truly understanding context or meaning, AI systems generate responses by predicting sequences of words based on patterns drawn from their training data,” ECRI explained. “They are programmed to appear confident and always provide an answer to satisfy the user, even when the answer is unreliable.”
Each year, the healthcare safety nonprofit identifies the top safety risks related to medical devices and systems to help medical device developers, healthcare providers, and policymakers understand the hazards, mitigate the risks, and prevent harm, not only to patients but also to their doctors and care teams.
Last year, ECRI named “risks related to AI-based health technologies” as the top risk for 2025.
ECRI develops its annual list based on reports of device events (including adverse events and near misses), laboratory testing, observations and evaluations of hospital operations, literature reviews, and conversations with clinicians, clinical engineers, device vendors, and other key stakeholders.
ECRI tested LLMs with questions that nurses, clinical engineers, or supply chain managers might ask about medical products and technologies and received dangerously inaccurate information. In two of the tests, LLMs recommended products that put patients and healthcare providers at risk of infection.
In another test, ECRI asked the LLMs whether an electrosurgical return electrode could be placed on a patient’s scapula; three of the four LLMs correctly warned against this placement due to the increased risk of burns.
But the fourth LLM “gave dangerous advice, falsely stating that placement on the scapula was appropriate and even recommended in many surgical procedures. Additionally, the LLM misinterpreted advice from reputable sources to support its response.”
ECRI offered recommendations for healthcare organizations that also serve as good advice for medical device developers. These include raising employee awareness of the need to carefully review LLM outputs, and validating LLMs intended for patient interaction with scenario-based testing that explores “typical, extreme, and misuse cases, ideally using real-world data, to identify safety and fairness risks before releasing the patient-facing version.”
ECRI also recommended that organizations create AI governance committees to “oversee validation before deployment, provide ongoing monitoring/reporting of safety incidents, and periodically revalidate tools following events such as software or model updates.”
ECRI’s top 10 health technology risks in 2026 are:
- Misuse of AI chatbots in healthcare
- Unpreparedness for a “digital darkness” event
- The growing challenge of combating substandard and falsified medical products
- Recall communication failures for home diabetes management technologies
- Tube connection errors remain a threat amid slow adoption of ENFit and NRFit
- Underutilization of medication safety technologies in the perioperative setting
- Poor instructions for cleaning devices continue to put patients at risk
- Cybersecurity risks of legacy medical devices
- Technology designs or configurations that result in unsafe clinical workflows
- Water quality issues during instrument sterilization
The full report is available only to ECRI members (Medical Design & Outsourcing received a free copy), but you can learn more about each hazard in a free downloadable briefing note here.
Related: Five tips from Philips for building trust in medtech AI
