
Panelists, from left: Lucila Ohno-Machado, Andy Beam, Milind Tambe and moderator Carey Goldberg
May 1, 2024 – Artificial intelligence (AI) in health care can be very beneficial, or very problematic if we are not careful about how it is used, experts said at a Harvard T.H. Chan School of Public Health event.
AI could help provide diagnoses for certain medical conditions or supplement the work of health care organizations, according to panelists who spoke at the in-person and livestreamed event, held April 30 at the Harvard Chan Studio. But it could also produce biased information or be used to spread misinformation, they said.
Speakers included Andy Beam, assistant professor of epidemiology at the Harvard Chan School and associate editor of NEJM AI; Lucila Ohno-Machado, deputy dean for biomedical informatics and chair of biomedical informatics and data science at Yale School of Medicine; and Milind Tambe, Gordon McKay Professor of Computer Science and director of the Center for Research on Computation and Society at Harvard University, and principal scientist and director of “AI for Social Good” at Google Research. Carey Goldberg, a science, health, and medicine journalist and co-author of “The AI Revolution in Medicine: GPT-4 and Beyond,” was the moderator.
On the positive side, AI can provide medical expertise to people who don’t have access to it, Beam said. “If you live in a rural part of the country and your nearest doctor is three hours away, you can at least get access to a facsimile (of medical expertise) quickly, cheaply, and easily,” he said.
AI can also help speed up diagnosis in mental health, Beam added. For example, he said, “On average, a person with bipolar I disorder goes undiagnosed for seven years. That can be a very difficult seven years. It may manifest in (the person’s) family through (something like) drug addiction, and there is no clear indication of what is happening.” Access to AI could lead to faster diagnosis and improve the quality of life of people with the disease, Beam said.
He noted that there have been documented cases of people on “medical odysseys” – those who have struggled for years to find a diagnosis for a mysterious illness – who found what they were looking for thanks to AI.
Tambe said AI can be beneficial in the field of mobile health. For example, a nonprofit he works with in India, called ARMMAN, runs a mobile health program that sends automated messages to pregnant women and new mothers, such as reminders to take iron or calcium supplements. AI helped the organization determine which women to focus its interventions on, he said. Tambe also noted that an organization hoping to increase vaccine uptake might be able to use AI to determine how best to do so, such as deciding whom to target with interventions like travel vouchers or reminders.
Although AI could bring efficiencies, Tambe cautioned that he would not want it to be used “in a way that eliminates human contact where it is absolutely necessary.”
This theme – proceeding with caution when it comes to AI – was echoed by the other panelists. “In terms of diagnosis, if you want to generate hypotheses, (AI) can help you,” Ohno-Machado said. “If you’re just relying on AI to make the diagnosis, I think we’re not there yet.”
Beam said one of his biggest concerns about the use of AI in health care, and in general, is misinformation. “We now have open-source (AI) models as powerful as GPT-4 – the model behind ChatGPT (the best-known AI system) – and there are virtually no safeguards that would prevent a bad actor from using them to spread disinformation,” he said. This means you could be chatting with someone on the internet without being able to tell whether that person is real or an AI-generated model designed to trick you into believing something false, he said.
Bias is another concern. Training sets (the curated data used to train AI models to learn patterns and relationships) can be biased, Ohno-Machado said. “We can try to change the algorithms (that drive AI), but there is no substitute for high-quality training data and quality model building.” Beam agreed: “There are inherent biases in the health care system that are codified in the data, and (AI) automates and operationalizes (those biases).” It’s important to make sure “we’re teaching our models what we actually want them to learn, as opposed to what’s coded in the data,” he said.
Panelists recommended other ways to ensure AI is used safely and responsibly in health care, such as carefully evaluating AI models, having the Food and Drug Administration regulate the models as medical devices, and training people working in AI on how to use it for social good.
For Beam, the best-case scenario for AI in health care would be for it to eventually work in the background “so that my life and my interactions with the health care system are smoother – they are faster, they are cheaper, and they are better.” He hopes AI will be able to do things like help synthesize the large amounts of evidence in the medical literature or provide real-time monitoring of air quality – that it will become “something that can bring (all this) together into a coherent whole and give you concrete, simple advice to follow in real time.”
Learn more
Misinformation doesn’t need to have the last word (Harvard Public Health Review)
New journal and podcast take a closer look at artificial intelligence in medicine (Harvard Chan School News)