By deploying and studying the impacts of AI tools across different healthcare providers and patient subgroups, the AI Assurance Lab strives to put the human back into artificial intelligence.

AI can now step in to aid with diagnosis, predict health outcomes, triage hospital patients, and more. That can cause a lot of concern, said Dr. David McManus, chair and professor of medicine at UMass Chan Medical School in Worcester. These tools must be free of bias and discriminatory practices, he said.

Enter UMass Chan's AI Assurance Lab, a new initiative launched in April to test the ethical use of AI in healthcare.

“We wanted to make sure that in trying to do something good, we weren’t further exacerbating health disparities by training a model that might, without realizing it, be biased in some way,” McManus said.

McManus and Dr. Adrian Zai, research IT director at UMass Chan, co-lead the Assurance Lab, where healthcare companies test their AI tools for real-world effectiveness and fairness.

“Even if the overall accuracy seems strong, that doesn’t mean (the AI) is performing fairly,” Zai said. “Ethical concerns are generally created by unequal impacts, not just statistical differences.”
AI stress testing
“One of the most common misconceptions about AI in healthcare is that good performance of a model throughout the development process… guarantees good performance in the real world,” Zai said.

The lab works with companies to containerize their AI tools, integrating the new technology into the lab’s testing environment. In analyzing each tool, the Assurance Lab evaluates security, reliability, and the potential for errors that could harm patients. The lab stress-tests the tools, evaluating their performance and capabilities under hypothetical and extreme conditions.

“We evaluate whether clinicians actually understand and appropriately contextualize the results,” Zai said. If the results produced by an AI tool are not transparent or obvious to providers, that creates ethical risk, particularly when AI is used to inform care decisions, he said.

The lab often uses real-world data from hospitals in the UMass Memorial Health system in central Massachusetts. In one case, Zai and McManus built a database of millions of digitized ECG results from racially diverse patients treated at UMass to test the performance of an AI tool designed to predict cardiovascular complications in pregnant women.
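Zai’s distinction between overall accuracy and fair performance can be made concrete with a simple example. The sketch below is purely illustrative, not the Assurance Lab’s actual pipeline: the model outputs, the subgroup labels, and the 10-percentage-point disparity tolerance are all hypothetical. It shows how a model with a respectable aggregate accuracy can still perform far worse for one patient subgroup, which is exactly the kind of gap that subgroup-level testing is meant to surface.

```python
from collections import defaultdict

def subgroup_accuracy(y_true, y_pred, groups):
    """Accuracy computed separately per subgroup; an aggregate score can hide these gaps."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical outputs from a model flagging cardiovascular risk (1 = high risk),
# with each prediction tagged by a (hypothetical) patient subgroup label.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"]

overall = sum(int(t == p) for t, p in zip(y_true, y_pred)) / len(y_true)
per_group = subgroup_accuracy(y_true, y_pred, groups)

print(f"overall accuracy: {overall:.2f}")        # 0.70 -- looks acceptable in aggregate
for g, acc in sorted(per_group.items()):
    print(f"group {g}: {acc:.2f}")               # A: 1.00, B: 0.40 -- very unequal

# Flag any subgroup trailing the best-performing group by more than an
# arbitrary, illustrative 10-percentage-point tolerance.
best = max(per_group.values())
for g, acc in per_group.items():
    if best - acc > 0.10:
        print(f"WARNING: group {g} underperforms the best group by {best - acc:.0%}")
```

In practice, a lab like this would presumably disaggregate richer metrics than raw accuracy, such as sensitivity or calibration per subgroup, but the principle is the same: check performance group by group before declaring a model safe.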
Unintended consequences
The AI Assurance Lab conducts its testing through UMass Chan’s Interprofessional Center for Experiential Learning and Simulation, or iCELS, a center focused on simulating real-world scenarios to train students, professionals, and technologies. AI is changing how providers deliver care, how patients receive care, and how the two interact, said Dr. Melissa Fischer, executive director of iCELS.

“Anytime you make a change to a complex system like a healthcare system, it’s best to look at it from multiple angles…and ask yourself, ‘What might be the expected and unintended outcomes?’” Fischer said.

iCELS is testing a new AI application designed to evaluate whether less invasive technology can be used in predictive testing. The lab ran simulations with volunteers to analyze how potential patients interact with the technology, how providers interact with the information it produces, and whether the AI gives providers enough information to make clinical decisions. Volunteers navigate the app to see how easily they can access, download, understand, and use the technology.

iCELS tests AI tools only on volunteers, not real patients, although Fischer hopes to one day have patients interact with providers around AI.

“How do we talk about this as part of a patient care experience, and how does that impact the clinician-patient experience? How does that impact potential trust between the clinician and patient?” she said.
Unsexy AI
When McManus and Zai launched the Assurance Lab, they thought they would focus on AI in clinical tools to aid diagnoses and better prepare less-trained providers. As the lab finishes its first nine months of operation, that goal has changed.

“AI applications in health should be expanded to include things that are really not that sexy,” McManus said. These unsexy applications include serving as a scribe during appointments and handling more administrative tasks, like deciding the order in which a fleet of vehicles should be dispatched or scheduling patients for colonoscopies.

“A lot of the future that I see of AI in health would be potentially starting with familiarizing people with AI, with some of the things that are not as risky or as difficult or as threatening,” McManus said.

McManus reads news articles claiming AI will replace doctors, but he said that concept won’t stand up to legal scrutiny anytime soon. In a life-and-death industry like healthcare, patients won’t be the first to let AI oversee their care.

“Today, you have to involve a human in the process. No AI directly makes decisions without human oversight,” Zai said. “It’s going to be like this for a little while, until there’s a much better security blanket.”
Mica Kanner-Mascolo is a staff writer at Worcester Business Journal, primarily covering the healthcare, manufacturing and higher education industries.
