
ZDNET Key Takeaways
- ChatGPT Health and Claude for Healthcare both debuted last week.
- Google’s MedGemma 1.5 model was introduced shortly after.
- They all point to the growing presence of AI in healthcare.
Three of the world’s largest AI labs kicked off the new year with the launch of products aimed at healthcare.
Their functions vary, but they all point in the same direction: a world in which patients, payers, and providers increasingly rely on artificial intelligence to accelerate critical operations and democratize access to key services. AI-based healthcare is still in its infancy, and the lack of federal oversight means there is very little accountability if the technology behaves in unexpected and dangerous ways. But the three new products give us a glimpse of what is likely to become the new normal.
Also: What the country’s toughest AI regulations will change in 2026, legal experts say
Here’s a look at each new tool, how they work, and who can currently access them.
ChatGPT Health and Claude for Healthcare
On January 7, OpenAI introduced ChatGPT Health, a chatbot feature that allows users to connect health records from apps like Apple Health and Function and receive personalized medical advice.
In a blog post, OpenAI wrote that the new health feature “was developed in close collaboration with doctors around the world to provide clear and useful health information.” It is currently being tested by a small group of early adopters and will be widely available on the web and iOS in the coming weeks, according to Axios. You can also join a waitlist for access.
Also: 7 Ways Health Tech Promises to Improve Your Life in 2026
Four days later, Anthropic launched a similar feature, Claude for Healthcare, which allows Pro and Max subscribers in the United States to upload personal health records through built-in connectors to health apps.
“Once connected, Claude can summarize users’ medical histories, explain test results in simple language, detect trends in fitness and health metrics, and prepare questions for appointments,” Anthropic wrote in its announcement. “The goal is to make patients’ conversations with doctors more productive and help users stay well-informed about their health.”
Claude for Healthcare also offers connectors and skills for payers and providers. Doctors, for example, can use it to speed up prior authorization: the process of verifying with an insurer that a given treatment or medication will be covered by a patient’s plan. Healthcare organizations can now access Claude for Healthcare via Claude for Business and the Claude developer platform.
OpenAI and Anthropic said in their announcements that users’ health data will not be used to train new models and that the new tools are not intended to replace direct in-person treatment. “Healthcare is designed to support, not replace, medical care,” OpenAI wrote in its blog.
Also: 40 million people worldwide use ChatGPT for their healthcare – but is it safe?
ChatGPT Health and Claude for Healthcare are similar enough to be considered direct competitors at a time when healthcare, relative to other industries, has been quick to adopt AI tools.
On the user side, large numbers of people use popular AI chatbots like ChatGPT and Microsoft’s Copilot for advice about health insurance, whether a particular set of symptoms is cause for concern, and other deeply personal health topics.
MedGemma 1.5
On January 13, Google announced the release of MedGemma 1.5, the latest in its MedGemma family of foundation models, designed to help developers build applications that can analyze medical text and images.
Also: Using Google AI Overview for health advice? It’s “really dangerous”, according to an investigation
Unlike ChatGPT Health and Claude for Healthcare, MedGemma 1.5 is not a standalone consumer-facing tool, but it can still be seen as part of the AI industry’s race to strengthen its presence in the healthcare sector.
MedGemma is a freely accessible model available via Hugging Face and Vertex AI.
Concerns
As developers readily admit, AI chatbots are still highly prone to hallucination: fabricating information and presenting it as fact. This obviously poses serious risks when someone chats with ChatGPT or Claude about their personal health concerns, which is why OpenAI and Anthropic have cautioned that their new features should be used only as a complement to, not a replacement for, actual healthcare providers.
Data privacy is another common, and justified, concern when it comes to sharing personal health records with AI systems. OpenAI and Anthropic appear to have anticipated this: both companies emphasize that their new features are designed to maximize privacy.
Also: Are AI health coach subscriptions a scam? My verdict after testing the Fitbits for a month
Claude for Healthcare users, for example, can control what health data is shared with the chatbot. Additionally, the sharing feature is disabled by default.
OpenAI added in its blog post that while ChatGPT’s new Health feature can reference relevant details from non-health conversations, such as a recent move, health-related conversations will always remain in that dedicated space. In other words, the chatbot won’t draw on your health conversations when you bring up an unrelated topic elsewhere. You can also view and edit the memories the chatbot stores in the Health tab or in the Personalization section of Settings.
