
8 Experts on Harnessing AI in Healthcare

December 7, 2024 | 18 Mins Read


Editor’s Note: This article includes information from Healthcare Dive’s recent live event, “AI and the Future of Healthcare.” You can watch the full event here.

Healthcare organizations face a number of obstacles when adopting artificial intelligence tools. Providers must address patient concerns, while payers and life sciences companies struggle to balance promises of effectiveness with ethical concerns such as bias.

They are doing all this on the promise that AI will automate repetitive tasks, reduce medical spending and waste, allow clinicians to spend more time with patients, and transform the healthcare industry.

But more than two years after generative AI became popular with the general public thanks to ChatGPT, healthcare is still playing catch-up in regulating, testing and implementing these tools, experts said during a panel hosted by Healthcare Dive on November 19.

Here’s advice from eight healthcare experts on what organizations should consider when implementing AI and how to develop standards and regulations.

How Providers Can Vet AI Tools

The first step providers should take when deciding whether or not to integrate an AI tool is to evaluate the clinical context in which it will be used. Not every tool is suitable for every clinical task, according to Sonya Makhni, medical director of the Mayo Clinic Platform.

“An AI algorithm that might be really good and appropriate for my clinical environment might not be so appropriate for another, and vice versa,” Makhni said. “Health systems need to understand…what to look for, and they need to understand their patient population so they can make an informed decision for themselves.”

Although providers must evaluate AI tools, they face obstacles in analyzing them due to their complexity, as the algorithms and models can be difficult to understand, she added.

“Our workforce is already under stress,” Makhni said. “We can’t really expect everyone to go and do a master’s degree in data science or artificial intelligence to be able to interpret solutions.”

To help solve this problem, providers can turn to public-private consortiums, like the nonprofit Coalition for Health AI, for guiding principles when evaluating AI tools.

“We learned a lot about the types of principles that solutions should adhere to,” Makhni said. “So safety, fairness, usefulness, transparency, explainability, privacy, you know, all of these things that we should use as a lens when we’re looking at AI solutions.”

Address patient concerns

Once providers decide to integrate AI tools, they face another potential stumbling block: their patients.

As AI tools become more popular, patients have expressed reservations about the technology being used in the doctor’s office. Last year, 60% of American adults surveyed by Pew Research said they would be uncomfortable if their provider relied on AI for their medical care.

To make patients more comfortable, Maulin Shah, chief medical information officer at Providence Health System, said clinicians should emphasize that right now, AI plays a purely supportive role.

“AI is actually, in many ways, a better way to support and provide decision support to your doc(tor), so they don’t miss anything or can suggest things,” Shah said.

Even though AI tools have only just become popular with the general public, patients can feel better knowing that AI has been around in the medical field for a long time.

Aarti Ravikumar, chief medical information officer at Atlantic Health System, highlighted transformative tools such as the artificial pancreas, or hybrid closed-loop insulin pump, which has become a “game changer” for insulin-dependent patients.

“All of this work is done using artificial intelligence algorithms,” Ravikumar said. “So we have AI tools built into our medical devices or our electronic medical record, and have for a long time.”

“None of these tools remove the clinician from that interaction or medical decision-making,” Ravikumar said. “If we get to the point where it’s going to automate decisions and remove the clinician from that decision-making process, then I think we’ll definitely have to explain a lot more.”

Combating Errors and Bias

Every organization will have to troubleshoot problems when integrating AI models. But the stakes are higher in healthcare, where biases and hallucinations, instances in which AI tools produce false or misleading information, can lead to disruptions in patient care.

Providers aren’t the only healthcare organizations grappling with bias. Payers have faced backlash for using AI tools to deny medical care, and tech companies have been accused of creating tools that worsen existing healthcare disparities.

According to Aashima Gupta, global director of healthcare at Google Cloud, it is essential for generative AI companies to keep humans in the loop, incorporating feedback from users such as experts, nurses and clinicians.

“To me, this feedback will make generative AI more effective for a given use case,” Gupta said.

AI companies should also test their models thoroughly. At Google, dedicated teams attempt to break AI tools through trickery, such as trying to provoke an incorrect answer to a question. Robust development and keeping humans in the loop go “hand in hand” with controlling errors, she added.

But while organizations need to be wary of errors and bias, AI tools could represent an opportunity to try to mitigate bias in healthcare, said Jess Lamb, a partner at the consulting firm McKinsey.

“There’s a ton of bias in the healthcare system before we introduce AI, right? So we have to remember that we’re not starting from a perfect point,” Lamb said. “The idea that we can actually use AI and use some of this deliberate surveillance to improve some of the current position that we find ourselves in when it comes to bias in health care, I think, is actually a huge opportunity.”

“We always talk about the risk of negative bias when it comes to the use of AI, but I actually think there’s a pretty significant benefit here as well in mitigating some of the existing biases that we see in the system,” she added.

Develop regulations and standards for health AI

As healthcare organizations decide whether to implement AI, the federal government and private consortiums are grappling with how to regulate it.

Although the federal government has made incremental progress in its attempt to regulate these tools, including by establishing rules, standards for the industry are still in their infancy.

AI adoption has been rapid since the tools became more common two years ago, which has amplified pressure on the government to pass regulations, said Micky Tripathi, assistant secretary for technology policy and acting chief AI officer at HHS.

Going forward, partnerships between the government and the private sector will be key, Tripathi said.

“There’s a maturation process that’s going to take place here that I think will largely be a public and private affair,” he said.

Tripathi also wondered whether regulation could help push the private sector to adopt its own standards and certifications for tools. In another area of the industry, the government provides standards that electronic health record companies can use to apply for voluntary certifications.

“For example, what would cause organizations around the country like Providence to feel obligated to use or obtain some sort of certification, or some sort of endorsement… from an organization that provides certain services to validate AI models?” he said. “Right now, it would just be a pure cost, either to a developer developing these solutions or to a provider implementing them.”

While consortiums can provide high-level frameworks for AI, organizations also need open standards to help address clinical AI use cases on the ground, said Sara Vaezy, chief strategy and digital officer at Providence.

“We need open standards similar to all the progress made in interoperability,” Vaezy said.

“The challenge today is that the consortia are far from where the work is happening for us, and we need to close that gap quickly. And I think one of the ways to do that is to create open standards,” she added.

Training for providers should accompany the creation of standards, according to Reid Blackman, founder and CEO of the consultancy Virtue. Training can also help fill regulatory or governance gaps related to AI.

“You can do a lot to make the doctor, nurse, etc. aware of these risks,” he added.

“Training is a critical part, I don’t want to say guardrails, but it’s a critical part of making sure things don’t go wrong,” Blackman said.

