AI is rapidly taking hold in healthcare, bringing potential benefits but also possible pitfalls such as biases that lead to unequal care and burnout among doctors and other healthcare workers. It is still unclear how this should be regulated in the United States.
In September, the hospital accreditation body the Joint Commission and the Coalition for Health AI issued recommendations for the implementation of artificial intelligence in medical care, with the burden of compliance falling largely on individual institutions.
I. Glenn Cohen, faculty director of Harvard Law School's Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics, and his colleagues suggested in the Journal of the American Medical Association that the guidelines are a good start, but that changes are needed to ease likely regulatory and financial burdens, particularly on smaller hospital systems.
In this edited conversation, Cohen, the James A. Attwood and Leslie Williams Professor of Law, discussed the difficulty of balancing thoughtful regulation with the need to avoid unnecessary barriers to game-changing innovation amid rapid adoption.
Is it clear that AI in healthcare needs to be regulated?
Any time medical AI handles something of medium to high risk, you need regulation, whether internal self-regulation or external government regulation. So far, this has mostly happened in-house, and there are differences in how each hospital system validates, reviews, and monitors healthcare AI.
When done on a hospital-by-hospital basis like this, the costs of this type of assessment and monitoring can be significant, meaning some hospitals can do it and others can’t. In contrast, top-down regulation is slower – perhaps too slow for some forms of progress in this area.
There is also a complex mix of AI products aimed at hospitals. Some can help with things like purchasing and internal reviews, but many others are clinical or clinically adjacent.
Some medical AI products interact directly with consumers, such as chatbots that people might use for their mental health. For those, we don't even have hospitals' internal review, and the need for regulation is even clearer.
With technology evolving so quickly, does speed even matter when it comes to regulation?
This is an innovation ecosystem that has a lot of startup energy, which is great. But you’re talking about something that can move extremely quickly, without a lot of internal review.
Whenever we enter what I call a "race dynamic," there is a risk that ethics gets left behind quite quickly. Whether it's a race to be the first to develop something, a startup racing against running out of money, or a national race between countries trying to develop artificial intelligence, time pressures and urgency make it easier to overlook ethical questions.
The vast majority of medical AI is never reviewed by a federal regulator — and probably by no state regulator either. We want standards for AI in healthcare and an incentive to adopt them.
But subjecting everything to the FDA’s rigorous process for drugs or even that for medical devices would in many cases be prohibitively expensive and prohibitively slow for those lured by Silicon Valley’s pace of development.
On the other hand, if they malfunction, many of these technologies pose a much greater risk to the general population than the average device on the market.
If you take aspirin or statins, there are differences in how they work in different people, but to a large extent we can characterize those differences in advance. When medical AI reads an x-ray or does something in mental health, how it is implemented is key to its performance.
Results can be very different among hospital systems, depending on resources, staffing, training, and the experience and age of the people using them. It is therefore necessary to study the implementation very carefully. That would create an unusual challenge for an agency like the FDA — which often says it doesn’t regulate the practice of medicine — because it’s complicated to know where approval of an AI system stops and the practice of medicine begins.
Your study examines a regulatory system suggested by the Joint Commission, a hospital accreditor, and the Coalition for Health AI. Would an accreditor naturally be something hospitals would – or should – pay attention to?
Exactly. In almost all states, to bill Medicare and Medicaid, you must be accredited by the Joint Commission. This represents a significant part of the business of almost all hospitals.
There is a rigorous process to qualify for accreditation, and from time to time you are reassessed. This is serious business.
The Joint Commission has not yet said that these AI guidelines will be part of its next accreditation cycle, but they are a sign that it may be moving in that direction.
Do you find that some recommendations leave something to be desired?
Some are more challenging than I expected, but I actually think they’re pretty good.
Requiring that, where appropriate, patients be informed when AI directly affects their care, and that, where appropriate, consent to the use of an AI agent be obtained, is a strong stance to take.
Many researchers and other organizations do not believe that medical AI should always be disclosed when it directly impacts care, much less that informed consent should always be sought.
The guidelines also require ongoing quality monitoring, with continuous testing, validation, and tracking of AI performance.
The frequency of monitoring would be tailored to the levels of risk in patient care. These are good things to do, but difficult and expensive. You will need to form multidisciplinary AI committees and constantly measure accuracy, errors, adverse events, fairness, and bias across populations.
If taken seriously, these requirements will likely be impractical for many hospital systems in the United States, which will have to decide up front whether or not to adopt AI at all.
You point out in your JAMA article that most hospitals in the United States are small community hospitals and that resources are a major issue.
People in large hospital systems who are already doing this tell me that properly vetting a complex new algorithm and implementing it can cost anywhere from $300,000 to half a million dollars. This is simply out of reach for many hospital systems.
Some elements of implementation will be specific to each hospital, but others are common across many hospital systems and would be helpful to share. The idea that we would conduct the assessment multiple times, in multiple locations, without sharing what we have learned seems like a real waste.
If the answer is, "If you can't play in the big leagues, you shouldn't play at all," that creates a divide between haves and have-nots in access to health care. We already have that in health care generally in this country, but this would reinforce that dynamic at the hospital level.
Your access to AI that aids medical care would be determined by whether you are in a network with the large academic medical centers that are plentiful in places like Boston or San Francisco, rather than in other parts of the country that lack that kind of medical infrastructure.
The goal, ideally, would be greater centralization and sharing of information, but these recommendations place much of the responsibility on individual hospitals.
Doesn’t a system that some hospitals can’t participate in negate the potential benefits of this latest generation of AI, which can help resource-poor places by providing expertise that might be lacking or hard to find?
It would be a shame if you had great AI that helps people and could be most beneficial in low-resource settings, and yet those settings cannot meet the regulatory requirements needed to implement it.
It would also be a sad reality, ethically, if it turns out that we are training these models on data from patients across the country, yet many of those patients will never benefit from them.
If the answer is that control and oversight of medical AI should be handed to a larger entity, should that be the government?
The Biden administration’s idea was to have “assurance labs” — private sector organizations that, in partnership with the government, could monitor algorithms to agreed-upon standards so health organizations could rely on them.
The Trump administration agrees on the problem but has indicated it doesn’t like the approach. They have not yet fully stated what their vision is.
This looks like a complex and rapidly changing landscape.
Complex, but also challenging and interesting.
I talk about legal and ethical issues in this space, but I’m optimistic about it. I think that in 10 years, the world will be much better off thanks to medical artificial intelligence.
Diffusion of these technologies into less resourced settings is very attractive, but only if we align incentives appropriately. This does not happen by chance, and it is important that these distributional concerns are part of any attempt to legislate in this area.
