Artificial intelligence appears almost daily in health news. One story touts a diagnostic breakthrough, another a scheduling shortcut, another a digital assistant promising to cut paperwork in half. It all looks transformative on the page. In practice, the friction points are smaller but more difficult: a stroke patient lost in referral transfers, a cardiology follow-up delayed because the systems can’t communicate, or a clinician hesitant to trust an unfamiliar result. Mathematics is rarely the obstacle. The context is.
During my doctoral work in neuroimaging, the technical models worked as expected; however, the barriers to real-world use were different. Imaging data could not be integrated into hospital records, regulatory approval processes were slow, and clinicians had little reason to trust results they did not fully control. Two decades later, the same hurdles of interoperability, regulation, and clinician trust still determine how artificial intelligence moves from research to practice.
The growing gap between promise and reality
Investment in artificial intelligence for healthcare continues to grow. One market forecast estimates the sector will reach $173.6 billion by 2029, with annual growth above 40 percent. Surveys suggest high expectations: more than four in five healthcare stakeholders believe AI will influence clinical decisions, and a similar proportion expect it to reduce labor costs through automation. Another report notes that 86 percent of organizations already describe themselves as using AI, with projections that the market will surpass $120 billion by 2028.
However, hospital adoption remains limited. A 2022 analysis indexed in PubMed found that fewer than one in five U.S. hospitals had implemented AI tools, and only about 4 percent were using them at a more advanced level. In clinical diagnostics, fewer than one organization in five reported meaningful success. Even among hospitals that do use predictive models, about two-thirds apply them to inpatient pathways, outpatient risk scoring, or planning support, and outcomes remain uneven, with limited confidence in the results. The gap between projected growth and actual adoption remains the central question.
Structural barriers, not technical ones
The obstacles slowing adoption are not primarily technical. Clinicians frame their needs in terms of patient care, such as continuity after a stroke or reliable monitoring of cardiology patients. Translating those clinical needs into software is not simple; it depends on people who understand how medicine and technology can work together over time. Decisions about payment and procurement are typically made by administrators removed from the clinical floor, so the costs often fall on one group while the benefits accrue to another. Even FDA-approved tools can lose credibility when they fail to perform outside controlled trials. Culture adds another layer: healthcare systems tend to be risk-averse and heavily regulated, and without direct support from leaders, progress slows.
Lessons from real pilots
During the COVID-19 pandemic, I helped lead a study testing wearable sensors for early detection of infection. The devices tracked signals such as heart rate and temperature, and the model predicted infection with about 82 percent accuracy, often days before symptoms appeared. The results were promising, but hospitals and regulators hesitated. Few were prepared to tell patients they might still be contagious while feeling well. The experience showed that technical success alone does not guarantee adoption; trust, regulation, and workflow readiness determine whether new tools are actually used.
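The study’s actual pipeline is not reproduced here, but to make the idea concrete, the sketch below shows what a minimal version of such a model might look like: a simple classifier trained on daily heart-rate and temperature features, with synthetic data standing in for real sensor readings. Every name and number in it is illustrative rather than drawn from the study.

```python
# Minimal sketch of a wearable-signal infection classifier.
# Synthetic data stands in for real sensor readings; features, weights,
# and results are illustrative, not the study's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical daily features: resting heart rate (bpm), deviation from
# each person's baseline heart rate (bpm), and skin temperature (deg C).
resting_hr = rng.normal(65, 8, n)
hr_deviation = rng.normal(0, 3, n)
skin_temp = rng.normal(36.5, 0.4, n)

# Synthetic labels: infection probability loosely tied to elevated
# heart-rate deviation and temperature.
logit = 0.6 * hr_deviation + 2.0 * (skin_temp - 36.5) - 1.0
infected = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([resting_hr, hr_deviation, skin_temp])
X_train, X_test, y_train, y_test = train_test_split(
    X, infected, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

The modeling step is the easy part; the hard questions in the pilot were who sees the prediction, when, and what they are expected to do with it.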
What leaders need to do differently
AI in healthcare should not be treated as an isolated innovation project; it is a question of leadership and strategy. Executives should press vendors for pilot results that show how tools perform in real operations, rather than relying on polished white papers. They should focus on service lines such as neurology, cardiology, and oncology, where even modest improvements affect both patient outcomes and financial performance. It is equally important to build understanding within the organization: clinicians, compliance officers, and board members all need a clear view of what artificial intelligence can and cannot do if trust is to grow. Adoption planning should start at the design phase, with interoperable records, aligned data flows, defined governance, and feedback systems that allow models to evolve. AI can ease administrative burdens and flag patients at risk of falling through the cracks, but it cannot replace empathy or judgment at the bedside. Those qualities remain the foundation of care and must be preserved.
Moving forward
Artificial intelligence will only matter in healthcare if strong technical work is matched by consistent operational leadership. Success requires realism about what systems can handle and patience to align processes that were never designed to share information. Interoperability is central: without it, algorithms remain confined to trials and pilot projects; with it, they can be woven into daily workflows and deliver visible results for patients and providers.
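None of the projects described above are specified at the code level, but as one concrete illustration of what “interoperable records” can mean in practice, the sketch below queries a hypothetical HL7 FHIR R4 server for a patient’s recent heart-rate observations. The endpoint URL and patient ID are placeholders; a real deployment would add authentication, error handling, and consent checks.

```python
# Illustrative only: reading heart-rate observations from a hypothetical
# FHIR R4 server. Endpoint and patient ID are placeholders.
import requests

FHIR_BASE = "https://fhir.example-hospital.org/r4"  # placeholder endpoint
PATIENT_ID = "example-patient-id"                   # placeholder identifier

# LOINC 8867-4 is the standard code for heart rate; the search parameters
# below (patient, code, _sort, _count) are part of the FHIR REST API.
params = {
    "patient": PATIENT_ID,
    "code": "http://loinc.org|8867-4",
    "_sort": "-date",
    "_count": 10,
}

resp = requests.get(f"{FHIR_BASE}/Observation", params=params, timeout=10)
resp.raise_for_status()

for entry in resp.json().get("entry", []):
    obs = entry["resource"]
    qty = obs.get("valueQuantity", {})
    print(obs.get("effectiveDateTime"), qty.get("value"), qty.get("unit"))
```

The point is not the particular standard but the principle: tools that read from and write back to shared records, rather than living in a silo, are the ones that reach daily workflows.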
References
- Pew Research Center. How Americans view the use of artificial intelligence in health and medicine. Pew Research Center; 2023. Available at: https://www.pewresearch.org/science/2023/12/12/how-americans-view-the-use-of-artificial-intelligence-in-health-and-medicine/
- PubMed Central (PMC). Studies of wearable sensors for infection detection. PubMed Central (PMC); 2023. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/
