
Rapid Adoption of AI in Primary Care Raises Safety Concerns

December 19, 2025

From digital scribes to ChatGPT, artificial intelligence (AI) is quickly making its way into general practice clinics. A new study from the University of Sydney warns that the technology is outpacing safety controls, putting patients and health systems at risk.

The study, published in Lancet Primary Care, synthesized global evidence on how AI is used in primary care, drawing on data from the US, UK, Australia, several African countries, Latin America, Ireland and other regions. It found that AI tools such as ChatGPT, AI scribes and patient-facing apps are increasingly used for clinical queries, documentation and patient counseling, but most are deployed without thorough evaluation or regulatory oversight.

"Primary care forms the backbone of health systems, providing accessible and continuous care. AI can ease the pressure on overburdened services, but without safeguards we risk unintended consequences for patient safety and quality of care."

Associate Professor Liliana Laranjo, study lead, Horizon Fellow at the Westmead Applied Research Centre

GPs and patients turn to AI, but evidence lags

Primary care is under strain around the world, from workforce shortages to clinician burnout and increasing healthcare complexity, all made worse by the COVID-19 pandemic. AI has been presented as a solution, offering time-saving tools that summarize consultations, automate administration and assist with decision-making.

In the UK, one in five GPs reported using generative AI in their clinical practice in 2024. But the study found that most research on AI in primary care is based on simulations rather than real-world trials, leaving critical gaps in evidence on effectiveness, safety and fairness.

The proportion of GPs using generative AI in Australia is not reliably known but is estimated at around 40%.

“AI is already in our clinics, but without Australian data on how many GPs are using it, and without appropriate monitoring, we are falling short on safety,” Associate Professor Laranjo said.

While AI scribes and ambient listening technologies can reduce cognitive load and improve job satisfaction for GPs, they also carry risks such as automation bias and the loss of important social or biographical details in medical records.

“Our study found that many GPs who use AI scribes do not want to go back to typing. They say it speeds up consultations and allows them to focus on patients, but these tools can miss vital personal details and introduce bias,” Associate Professor Laranjo said.

For patients, symptom checkers and health apps promise convenient, personalized care, but their accuracy varies widely and many lack independent evaluation.

“Generative models like ChatGPT can appear convincing but be factually wrong,” said Associate Professor Laranjo. “They often agree with users even when they are wrong, which is dangerous for patients and difficult for clinicians.”

Equity and environmental risks of AI

Experts warn that while AI promises faster diagnoses and personalized care, it can also widen health gaps if biases creep in. Dermatology tools, for example, often misdiagnose conditions on darker skin tones, which are typically underrepresented in training datasets.

Conversely, when designed well, researchers say AI can address inequities: an arthritis study doubled the number of Black patients deemed eligible for knee replacement by using an algorithm trained on a diverse dataset, which predicted patient-reported knee pain better than standard clinician interpretation of X-rays.
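To make that mechanism concrete, below is a minimal, hypothetical sketch in Python on synthetic data. It is not the study's model; the feature names, group labels and coefficients are illustrative assumptions. It only shows why a predictor trained on patient-reported pain, using imaging findings that a conventional severity grade ignores, can flag more patients from a group whose pain that grade under-captures.

```python
# Hypothetical sketch on synthetic data -- not the study's model or dataset.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n = 2000

group = rng.integers(0, 2, size=n)                  # 1 = toy "underrepresented group" label
grade = rng.normal(size=n)                          # conventional radiographic severity grade
# Imaging findings the grade ignores; assumed (for illustration) to be more common in group 1.
other_findings = rng.normal(loc=0.8 * group, size=n)
# Reported pain depends on both kinds of findings in this toy setup.
pain = grade + 1.5 * other_findings + rng.normal(scale=0.3, size=n)

# Train against what patients actually report, not against the grade.
X = np.column_stack([grade, other_findings])
predicted_pain = Ridge().fit(X, pain).predict(X)

# "Eligible for referral" = top quartile under each criterion.
by_grade = grade >= np.quantile(grade, 0.75)
by_model = predicted_pain >= np.quantile(predicted_pain, 0.75)
for name, mask in [("grade alone", by_grade), ("pain model", by_model)]:
    print(f"{name:>11}: {mask.sum():4d} eligible, "
          f"{group[mask].mean():.0%} from the underrepresented group")
```

In this contrived setup the grade-based criterion selects a roughly even split of patients, while the pain-trained model selects noticeably more from the group whose findings the grade ignores; the real study's result depended on a diverse training cohort rather than on this toy construction.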

“Ignoring socio-economic factors and universal design could turn AI in primary care into a setback,” said Associate Professor Laranjo.

The environmental costs are also enormous. Training GPT-3, the large language model released in 2020 that preceded ChatGPT, emitted an amount of carbon dioxide equivalent to 188 flights between New York and San Francisco. Data centers now consume around 1% of the world's electricity, and in Ireland they account for more than 20% of national electricity consumption.
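For readers curious how such flight-equivalence figures are derived, here is a trivial back-of-envelope sketch; both input numbers are illustrative placeholders, not values taken from the study or from any model provider.

```python
# Back-of-envelope only: how a "training emissions = N flights" comparison is computed.
# Both figures are illustrative placeholders, not values reported in the study.
training_emissions_tco2e = 500.0   # assumed total CO2-equivalent for training one large model
per_flight_tco2e = 2.7             # assumed CO2-equivalent attributed to one New York-San Francisco flight
print(f"Flight equivalents: {training_emissions_tco2e / per_flight_tco2e:.0f}")
```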

“The environmental footprint of AI poses a challenge,” said Associate Professor Laranjo. “We need sustainable approaches that balance innovation with equity and planetary health.”

Researchers urge governments, clinicians and technology developers to prioritize:

  • robust evaluation and real-world monitoring of AI tools
  • regulatory frameworks that keep pace with innovation
  • education of clinicians and the public to improve AI knowledge
  • bias mitigation strategies to ensure equity in healthcare
  • sustainable practices to reduce the environmental impact of AI.

“AI offers a chance to reinvent primary care, but innovation must not come at the expense of safety or equity,” Associate Professor Laranjo said. “We need partnerships across sectors to ensure AI benefits everyone, not just those who are tech-savvy or well-resourced.”

Journal reference:

Laranjo, L., et al. (2025). Artificial intelligence in primary care: innovation at the crossroads. Lancet Primary Care. DOI: 10.1016/j.lanprc.2025.100078. https://www.thelancet.com/journals/lanprc/article/PIIS3050-5143(25)00078-0/fulltext
