Twelve months ago, Marshall Runge wasn’t really a skeptic. But he wasn’t a true believer either. He believed generative AI would be useful in healthcare: a helpful tool, an incremental step forward. He didn’t think it would move this quickly. He didn’t think it would change this much. He certainly didn’t expect to be sitting across from Karen Webster, CEO of PYMNTS, describing a world in which AI is already reshaping how hospitals operate, how doctors think, and how patients first seek help. Nor did he expect to admit, with the frankness of a genuinely surprised person, that the transformation has only just begun.
“In one year,” he says, “I completely changed my mind.”
This confession carries weight when it comes from someone like Runge. He led Michigan Medicine, one of the nation’s leading academic medical centers, as dean and CEO. He is a cardiologist and physician-scientist whose career lies at the intersection of clinical practice and medical innovation.
He is not easily dazzled.
He has seen waves of transformation sweep through healthcare before, and he has watched many promises dissolve on contact with the stubborn realities of the system. When someone like that says the speed and scale of what’s happening with AI genuinely caught him off guard, it’s worth paying attention.
The Monday conversation that followed was part progress report, part honest debrief: a look at what AI already brings to healthcare, where it falls short, and what lies between its current promise and its long-term potential.
“Doctor GPT” and the new gateway to care
One of the most important changes Runge describes is happening not inside hospitals, but in the privacy of patients’ homes, before they even call a doctor.
Webster said it directly: patients “talk to ‘Doctor GPT’ as if they were talking to a real doctor.” They enter their symptoms, their fears, their questions into AI systems that respond immediately, synthesize information across thousands of variables and never make anyone feel like they’re wasting their time.
Runge does not dismiss this. He uses AI as a knowledge resource himself, and he has been genuinely impressed by what it can do.
“AI thinks broadly,” he said. It can simultaneously keep in mind a patient’s age, medications and underlying conditions, making connections that a doctor running late and juggling a full workload might miss. He has seen AI unlock diagnostic possibilities that trained clinicians had not initially considered.
But the risks, he stressed, are real.
Overreliance. Misplaced trust. The seductive feel of a confident answer where clinical uncertainty is the honest truth. AI does not wear a stethoscope. It can’t read a room. It cannot sense that a patient is afraid, or that something in their affect suggests the problem is not the one they are describing.
Stop holding AI to a standard that medicine itself cannot meet
It is here that Runge’s thinking becomes most striking. He clearly and unreservedly rejected the idea that AI must be error-free before it earns its place in the clinical setting.
“We cannot demand something that is simply unachievable,” he said. Medicine itself, practiced by the most careful and experienced doctors, produces errors. Demanding perfection from AI, and holding off on deployment until it arrives, is not a safety standard. It is a recipe for doing nothing.
What he wants is more structure. Certification. Guardrails.
“I think everything we do medically with AI should be certified and have guardrails,” he told Webster.
He knows that the first adverse event attributed to AI will ignite a firestorm, regardless of how AI’s safety record actually compares to the human baseline. He wants a framework that anticipates that moment rather than being undone by it.
“We’re in that middle zone,” Runge said. Not ready for AI as a standalone provider of care. But truly, and urgently, ready to do more.
20% more capacity. Without hiring a single surgeon.
If the philosophical arguments seem abstract, Runge offers a figure to support them: 20%.
That’s how much one hospital increased operating room utilization after deploying AI to observe surgical workflows and predict when patients would move from the operating room to recovery. Runge describes it, half-jokingly, as learning the difference between Dr. Speed and Dr. Slow: the AI learned doctor- and procedure-specific patterns and used them to orchestrate the schedule more precisely than human planners ever could.
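The interview doesn’t describe the system’s internals, but the core idea, learning how long cases actually take per surgeon and per procedure, then packing the day around those learned durations, can be sketched in a few lines. Everything below (the names, the case log, the 480-minute day, the 20-minute turnover) is invented for illustration:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical historical case log: (surgeon, procedure, minutes taken).
history = [
    ("dr_speed", "appendectomy", 42), ("dr_speed", "appendectomy", 38),
    ("dr_slow", "appendectomy", 65), ("dr_slow", "appendectomy", 71),
    ("dr_speed", "hernia_repair", 55), ("dr_slow", "hernia_repair", 90),
]

def learn_durations(log):
    """Average observed minutes per (surgeon, procedure) pair."""
    buckets = defaultdict(list)
    for surgeon, procedure, minutes in log:
        buckets[(surgeon, procedure)].append(minutes)
    return {key: mean(values) for key, values in buckets.items()}

def schedule_day(cases, model, day_minutes=480, turnover=20):
    """Greedily pack cases into one OR day using learned durations."""
    booked, used = [], 0
    for surgeon, procedure in cases:
        expected = model[(surgeon, procedure)] + turnover
        if used + expected <= day_minutes:
            booked.append((surgeon, procedure))
            used += expected
    return booked, used

model = learn_durations(history)
booked, used = schedule_day(
    [("dr_speed", "appendectomy"), ("dr_slow", "hernia_repair"),
     ("dr_speed", "hernia_repair"), ("dr_slow", "appendectomy")],
    model)
```

A real system would model variance and patient-specific factors rather than simple averages, but even this toy version shows why per-surgeon estimates beat one-size-fits-all block times: Dr. Slow’s hernia repair is budgeted 35 minutes longer than Dr. Speed’s, and the day fills accordingly.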
Think about what 20% means in a hospital operating at or near capacity.
It means more patients helped, more procedures performed, more families told “we can see you next week” instead of “next month.” It means shrinking waitlists that drive patients to emergency departments out of desperation rather than necessity. It means expanding what the system can do without growing the workforce, a critical benefit as physician shortages worsen across the country.
From this perspective, AI does not replace surgeons. It gives them more room to work.
The scheduling problem no one talks about
Access to healthcare is usually framed as a supply problem: not enough doctors, not enough nurses, not enough hours in the day. Runge reframed it. Scheduling, he said, was “chaos”: appointment slots scattered across decentralized systems, invisible to one another, each managed in isolation. Centralization helped. AI does something more fundamental.
An AI system can instantly analyze a doctor’s entire calendar: not just the obvious open slots, but the underutilized pockets of time hidden within a week. It can find the gap between a canceled appointment and a skipped lunch. It can match that gap to the patient who needs it.
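Stripped of the machine learning around it, the gap-finding step itself is a simple sweep over a sorted calendar. A minimal sketch, with an entirely invented example day (a canceled afternoon leaves the calendar open after noon):

```python
from datetime import datetime, timedelta

def free_gaps(appointments, day_start, day_end, min_minutes=15):
    """Return open gaps between booked slots, longest first."""
    gaps, cursor = [], day_start
    for start, end in sorted(appointments):
        # Any space between the last booked slot and this one is a gap.
        if start - cursor >= timedelta(minutes=min_minutes):
            gaps.append((cursor, start))
        cursor = max(cursor, end)
    if day_end - cursor >= timedelta(minutes=min_minutes):
        gaps.append((cursor, day_end))
    return sorted(gaps, key=lambda g: g[1] - g[0], reverse=True)

# Hypothetical day: a 9-10 visit, a 10:30-12 visit, afternoon open.
day = datetime(2024, 5, 6)
booked = [
    (day.replace(hour=9), day.replace(hour=10)),
    (day.replace(hour=10, minute=30), day.replace(hour=12)),
]
gaps = free_gaps(booked, day.replace(hour=9), day.replace(hour=17))
```

The hard part in practice is not finding the gaps but matching them to patients, weighing urgency, visit type, and travel time, which is where the AI systems Runge describes earn their keep.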
“Access is key,” Runge said, making both a practical observation and a moral one.
Patients who can’t get an appointment on time don’t just wait. They worry. They delay care. Or they end up in emergency rooms, which drives up costs and consumes resources that should go elsewhere.
The real obstacle: it’s not the algorithm
Ask Runge what stands between AI and its full impact on healthcare, and he doesn’t talk about computing power, data quality, or even regulatory frameworks. He talks about money. Specifically, a payment system he described, with unusual frankness, as “completely flawed.”
The relative value units that determine how doctors are paid were designed several decades ago. They persist not because they reflect clinical value, but because they are administratively practical and deeply embedded. Transitioning to outcomes-focused models, paying for health rather than procedures, would require dismantling the measurement frameworks and compensation structures around which entire institutions have been built.
Reform, Runge acknowledged, will neither be quick nor easy.
But it’s not optional. AI can optimize a broken system. It can’t fix one.
What AI can’t do – and why it still matters
For all his conversion, Runge has not lost his clinical instinct. He was clear about where AI’s reach stops: the physical examination, the reading of human emotion, the relational trust that builds over years between a doctor and a frightened patient looking for someone to believe in.
“I’m not convinced it’s really capable of reading human emotion,” he said. He said it not to attack AI, but to remind us what medicine, at its best, really is.
It is not an information-delivery system. It is a human encounter.
The doctor of the future, in Runge’s vision, will need enough medical knowledge to interrogate AI outputs rather than simply accept them: to understand why a recommendation is being made, to push back when something is wrong, to exercise the clinical judgment no algorithm has yet replicated.
And they will have to do, more than ever, what AI cannot: be present.
A year ago, Runge would have described AI as a promising tool. Today he describes it as infrastructure, already embedded in the operational and cognitive layers of healthcare, still accelerating, and nowhere near its ceiling.
For him, the question is no longer whether AI will transform medicine. The question is whether the institutions, payment structures, and regulatory frameworks surrounding medicine can transform quickly enough to enable it.
For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.
