From digital scribes to ChatGPT, artificial intelligence (AI) is quickly making its way into general practice clinics. A new study from the University of Sydney warns that the technology is outpacing safety safeguards, putting patients and health systems at risk.
The study, published in Lancet Primary Care, synthesized global evidence on how AI is used in primary care, drawing on data from the US, UK, Australia, several African countries, Latin America, Ireland, and other regions. It found that AI tools such as ChatGPT, AI scribes, and patient-facing apps are increasingly used for clinical queries, documentation, and patient counseling, but most are deployed without thorough evaluation or regulatory oversight.
"Primary care forms the backbone of health systems, providing accessible and continuous care. AI can ease pressure on overburdened services, but without safeguards we risk unintended consequences for patient safety and quality of care."
Associate Professor Liliana Laranjo, study leader and Horizon Fellow at the Westmead Applied Research Centre
GPs and patients turn to AI, but evidence lags
Primary care is under strain around the world, from workforce shortages to clinician burnout and increasing healthcare complexity, all made worse by the COVID-19 pandemic. AI has been promoted as a solution, offering time-saving tools that summarize consultations, automate administration, and assist with decision-making.
In the UK, one in five GPs reported using generative AI in their clinical practice in 2024. But the study found that most research on AI in primary care is based on simulations rather than real-world trials, leaving critical gaps in the evidence on effectiveness, safety, and fairness.
The number of GPs using generative AI in Australia is not reliably known, though estimates put it at around 40%.
"AI is already in our clinics, but without Australian data on how many GPs are using it, and without appropriate monitoring, we cannot ensure it is being used safely," Associate Professor Laranjo said.
While AI scribes and ambient listening technologies can reduce cognitive load and improve job satisfaction for GPs, they also carry risks such as automation bias and the loss of important social or biographical details in medical records.
“Our study found that many GPs who use AI scribes do not want to go back to typing. They say it speeds up consultations and allows them to focus on patients, but these tools can miss vital personal details and introduce bias,” Associate Professor Laranjo said.
For patients, symptom checkers and health apps promise convenient, personalized care, but their accuracy varies widely and many have never been independently evaluated.
"Generative models like ChatGPT can sound convincing while being factually wrong," said Associate Professor Laranjo. "They often agree with users even when the users are mistaken, which is dangerous for patients and challenging for clinicians."
Equity and environmental risks of AI
Experts warn that while AI promises faster diagnoses and personalized care, it can also widen health gaps if biases creep in. Dermatology tools, for example, often misdiagnose conditions on darker skin tones, which are typically underrepresented in training datasets.
Conversely, researchers say that well-designed AI can reduce inequities: in one arthritis study, an algorithm trained on a diverse dataset doubled the number of Black patients eligible for knee replacement by predicting patient-reported knee pain better than standard clinical readings of X-rays.
“Ignoring socio-economic factors and universal design could turn AI in primary care into a setback,” said Associate Professor Laranjo.
The environmental costs are also enormous. Training GPT-3, the 2020 model that preceded ChatGPT, emitted an amount of carbon dioxide equivalent to 188 flights between New York and San Francisco. Data centers now consume around 1% of the world's electricity, and in Ireland they account for more than 20% of national electricity consumption.
“The environmental footprint of AI poses a challenge,” said Associate Professor Laranjo. “We need sustainable approaches that balance innovation with equity and planetary health.”
Researchers urge governments, clinicians and technology developers to prioritize:
- robust evaluation and real-world monitoring of AI tools
- regulatory frameworks that keep pace with innovation
- education for clinicians and the public to improve AI literacy
- bias mitigation strategies to ensure equity in healthcare
- sustainable practices to reduce the environmental impact of AI.
“AI offers a chance to reinvent primary care, but innovation must not come at the expense of safety or equity,” Associate Professor Laranjo said. “We need partnerships across sectors to ensure AI benefits everyone, not just those who are tech-savvy or well-resourced.”
Journal reference:
Laranjo, L., et al. (2025). Artificial intelligence in primary care: innovation at the crossroads. Lancet Primary Care. DOI: 10.1016/j.lanprc.2025.100078. https://www.thelancet.com/journals/lanprc/article/PIIS3050-5143(25)00078-0/fulltext
