

VA watchdog issues warning on medical staff’s use of AI

January 19, 2026 · 5 min read

A Department of Veterans Affairs watchdog issued an urgent advisory Thursday regarding two AI chat tools currently used by VA health care providers, citing “potential risks to patient safety.”

Doctors do not use AI to diagnose patients, but the inspector general found that problems could arise when clinicians use artificial intelligence to analyze medical information and update patient records.

The bulletin said the systems may be prone to producing misinformation, privacy violations, and bias, and that they had been put in place without review by the VA’s own patient safety experts.

When reached for comment on the advisory, Pete Kasperowicz, a spokesperson for the department, said that “VA clinicians use AI only as a support tool, and decisions regarding patient care are always made by appropriate VA personnel.”

Dr. Matthew Miller, former executive director of VA Suicide Prevention, told Task & Purpose the memo was obviously urgent.

“This is an official communication from the (Office of Inspector General) recommending a ‘press stop’ on AI tools under review, based on perceived patient safety concerns,” Miller said. “Specifically, (the Office of Inspector General) discovered information in its review indicating that appropriate process-oriented safeguards may not be in place.”

Dr. David Shulkin, who served as VA secretary in 2017 and 2018 and now does technology consulting for health care companies, said he doesn’t think the report should be seen as an “argument for VA to stop or slow down what it’s doing.”

“I actually think it’s important for veterans to know that VA has the best and most capable technologies to get their care done, so I don’t think you want that to be a reason for VA to move away from artificial intelligence,” he said. “But I think it was an appropriate report to suggest that there is still work to be done here, to ensure that technologies are used appropriately and that we maintain patient safety as the highest priority for veterans.”

Two chatbots, but little monitoring

The notice, published as an advisory memo on preliminary findings, or PRAM, is only two pages long and does not cite specific cases where patient safety was at risk.

According to the memo, VA health care providers can now access two AI chat systems for use in patient care: VA GPT and Microsoft 365 Copilot Chat. The tools, both large language model (LLM) chat systems, are intended to simplify and reduce the time spent documenting patient care. Physicians and medical providers enter patients’ clinical information into the chat tool, whose output can be copied into veterans’ electronic health records and “may be used to support medical decision-making.”

But the IG cited research finding that AI systems can introduce errors or omit correct data in between 1% and 3% of cases when processing patients’ medical data. Federal investigators said that if generative AI tools produced incorrect or omitted data, it “could affect diagnostic and treatment decisions,” referring to a May 2025 study on the risks of LLMs used for medical documentation. The study was carried out by researchers from TORTUS AI, maker of an AI support tool used by clinicians, and Guy’s and St Thomas’ NHS Trust, part of the UK’s National Health Service.


According to the research, LLMs used to summarize medical documents can produce errors called “hallucinations,” fabricating information that was not in the original data or entirely omitting information the user entered.

“Errors in the generation of clinical documentation can lead to inaccurate recording and reporting of facts. Inaccuracies in the document summarization task can introduce misleading details into conversations or transcribed summaries, potentially delaying diagnoses and causing unnecessary anxiety in patients,” the researchers wrote.

A goal of reducing “burnout”

The benefit, the researchers found, is that AI could help clinicians who spend much of their time on paperwork, which increases doctors’ “cognitive load” and can “lead to burnout.” Other research noted similar benefits of using AI to give providers back time they can spend with patients as well as using AI tools to translate complex medical language into more accessible instructions for patients.

VA GPT, launched by the agency as a generative AI pilot tool, had nearly 100,000 users in September 2025 and was estimated to save them two to three hours per week.

Through interviews, the IG found that the VA did not work with the agency’s National Center for Patient Safety before implementing these AI chat tools, nor did it have a formal mechanism in place to identify or remediate risks related to the use of generative AI tools.

Inquiries from Task & Purpose to the National Center for Patient Safety were not immediately returned.


“The absence of a process precludes a feedback loop and a means of detecting patterns that could improve the safety and quality of AI chat tools used in clinical settings,” the advisory states.

James McCormick, executive director of government affairs for Vietnam Veterans of America, said the group has not received any complaints or concerns from its members about AI tools at the VA.

“However, if issues arise that could put members at risk, we certainly support further analysis of these tools for corrective action and improvements to ensure only the highest quality of care and services for our veterans,” McCormick said.


Patty is a senior reporter for Task & Purpose. She has reported on the military for five years, embedding with the National Guard during a hurricane and covering legal proceedings at Guantanamo Bay against a suspected al-Qaeda commander.

