From our partner
This article is sponsored by our partners at Netsmart
The role of artificial intelligence (AI) in healthcare continues to evolve at a rapid pace, and healthcare leaders are rightly responding with both optimism and caution.
From large hospital systems to community behavioral health providers, leaders are closely watching AI’s potential to transform care delivery. They ask specific, measurable questions: Will this technology reduce administrative burden? Can it improve patient outcomes? Is it reliable, explainable and safe?
According to a 2024 HIMSS-Medscape report, 86% of healthcare systems now use some form of AI, with the majority applying it to detect clinical patterns and generate insights that humans might overlook. A separate 2024 McKinsey analysis showed that more than 70% of healthcare stakeholders, including providers, payers, and health technology vendors, are actively adopting generative AI solutions. These trends reflect a broader industry movement toward digital transformation, but they also highlight a growing list of expectations.
Simply put: healthcare leaders aren't looking for novelty. They are looking for value and efficiency. They want AI to solve real problems without creating new ones, and they're setting the bar higher than ever. The following themes reflect key expectations that emerged from field observations and conversations with leaders in healthcare settings.
1. Clinician support and human-centered care
One of the most commonly cited goals for AI adoption is to reduce administrative burden on clinical staff. Documentation remains one of the biggest contributors to burnout, often requiring evenings and weekends. Leaders are calling for AI tools that streamline repetitive tasks, present relevant information in context, and allow clinicians to spend more time on direct patient care.
In one study, clinicians using AI scribes like Bells Virtual Scribe saved an average of three hours per week on out-of-hours documentation, a clear indication that ambient AI can reduce administrative burden and enable clinicians to be more engaged during patient encounters.
2. Transparency, trust and governance
For healthcare organizations to make progress in AI, they must have confidence in how decisions are made and how data is managed. Leaders are increasingly demanding that AI tools be explainable, verifiable, and governed by clear ethical standards. Concerns include where data is stored, who has access to it, and how AI-generated results influence care decisions.
This demand for transparency is rooted in a growing awareness of the limits of AI. High-profile examples of AI-related hallucinations, including misdiagnoses of fictitious conditions, have reinforced the need for robust clinical monitoring and control. A 2025 Philips survey found that 63% of clinicians believe AI can improve patient outcomes, while only 48% of patients agree.
However, when clinicians explain how AI is used, 79% of patients report increased comfort. This highlights the importance of trust and transparency at all levels.
3. Integration with existing systems and workflows
AI solutions should enhance, not complicate, existing clinical workflows. This means seamless integration into electronic health records (EHRs), billing systems and quality reporting tools. Poorly integrated tools risk duplicating efforts and disrupting workflow, which can reduce adoption rates and compromise quality of care.
Healthcare leaders are placing a premium on AI that supports interoperability and connects to a broader data ecosystem. Integrated tools that generate real-time suggestions in the EHR, alert providers to gaps in care, or flag compliance issues are much more likely to be adopted. Integration is not a technical preference; it is a strategic necessity.
4. Real-time intelligence to improve results
Healthcare organizations are looking for actionable insights that can be applied in the moment, not weeks later. AI-based tools that identify at-risk patients, suggest rapid interventions, or alert healthcare teams to emerging gaps in care are considered highly useful. These applications go beyond retrospective reporting and support a more proactive and predictive approach to care delivery.
This real-time decision support is particularly crucial in high-stakes areas such as behavioral health, chronic disease management and acute care coordination. Leaders want AI to function as a clinical partner, providing insights that improve both the quality and timeliness of care.
5. Mitigation of financial and compliance risks
With the rise of value-based care and increasing payer oversight, financial risk is an ever-present concern. AI can play a key role in ensuring documentation compliance, identifying missing elements before claims are submitted, and reducing the likelihood of audits or rework.
Leaders are keenly aware of the financial implications of documentation gaps. By quickly flagging potential issues, AI can help protect reimbursement rates and preserve both revenue and clinical integrity. In this context, AI becomes not only a clinical tool, but an essential part of revenue cycle management.
A clear set of expectations
As the pace of AI adoption accelerates, so do the expectations placed on it. Healthcare leaders no longer evaluate AI based on its potential alone: they demand measurable results. They want tools that are explainable, interoperable and clinically meaningful. They want partners who provide transparency, governance and staff support.
Above all, they want technology that aligns with the values of healthcare itself: security, trust and human connection.
About the author
Chris Cotton is the Director of Customer Development for Netsmart.
