
AI is no longer a future concept; it has become a practical tool helping hospitals and health systems expand care, ease pressure on the workforce, and improve patient engagement. From virtual nursing models that extend clinical capacity to AI-driven automation in call centers and back-office functions, healthcare organizations are finding new ways to manage workloads, reduce costs, and improve the patient experience.
Despite its promise, AI in healthcare is not without risk, and the stakes are high. Without strong safeguards for data integrity, cybersecurity, and regulatory compliance, the same tools that can strengthen patient trust and quality of care can just as easily undermine them. Like any clinical intervention, AI must be deployed with clear guidance and human oversight. The organizations that succeed with AI are not those that adopt it the fastest, but those that adopt it responsibly, combining innovation with rigorous security and accountability.
The growing role of AI and its growing risks
AI is reshaping almost every aspect of healthcare. Providers use it to automate tedious tasks such as documentation and to support diagnosis and care recommendations. Payers use it to streamline claims processing, identify fraud, and manage appeals.
Research from the American Medical Association found that doctors’ use of AI in their practices almost doubled in 2024, from 38% to 66%, an “unusually rapid” technology adoption rate for the industry. The same report shows that doctors are increasingly familiar with a range of AI use cases, such as triage assistance, clinical documentation, surgical simulations, and predictive analysis of health risks and treatment outcomes.
While this growth demonstrates how AI can increase efficiency and improve care, it also highlights the growing need for responsible oversight, particularly where patient care is directly affected. Behind every AI-based diagnostic workflow or recommendation is a complex set of algorithms that interpret data and detect patterns to generate insights. These systems are becoming deeply embedded in clinical decision-making, and as their influence expands, it is critical to understand their potential for bias and inaccuracy.
Algorithms may be objective by design, but their quality depends on the data they are trained on, and even systems that perform well in testing can have unintended consequences in the real world. For example, several major insurers have faced lawsuits in recent years over AI algorithms that allegedly led to wrongful denials of coverage for medical services. Bias, hallucinations, and other flaws in AI systems can produce such outcomes even without intent or awareness of harm. Ongoing human oversight, ethical governance, and transparent communication with patients about how AI informs their care therefore remain necessary.
Securing AI in an era of increasing vulnerability
Cybersecurity is another essential part of this conversation, but it is often overlooked. Every digital health innovation relies on sensitive patient data, and as AI adoption grows, so do the scale and sensitivity of that data.
The February 2024 ransomware attack on Change Healthcare, the largest medical data breach in U.S. history, made this reality painfully clear. Hackers used stolen credentials to access an account that lacked multi-factor authentication, crippling claims processing and care operations nationwide and affecting approximately 190 million people. The incident underscored a new truth of the AI era: patient safety now depends as much on cybersecurity as it does on clinical care.
As healthcare delivery becomes increasingly dependent on technology, organizations must strengthen their defenses against growing cyber threats. Building resilience starts with fundamental security practices, including:
- Staff training: A well-informed workforce is the foundation of strong security. Regular training sessions and phishing simulations tailored to each department’s needs help foster a culture of awareness, accountability, and continuous improvement.
- Multi-factor authentication (MFA): MFA should be mandatory for all system access, providing a critical second line of defense if credentials are compromised.
- Vendor verification: Supply chain attacks remain one of the top threats to healthcare cybersecurity. Continuous monitoring of third-party partners helps identify vulnerabilities before they spread.
- Incident response planning: Attacks are not a question of if, but when. A tested and well-practiced response plan is essential to minimize disruption and maintain continuity of care.
- AI-based defense: AI is not limited to clinical use. Healthcare organizations can also deploy AI-powered security tools to automate threat detection, sift through alerts, and streamline incident response; a brief sketch of this idea follows this list. Even resource-constrained IT teams can use these tools to build resilience against modern threats.
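
To make the AI-based defense point more concrete, here is a minimal sketch of one simple form such a tool can take: an unsupervised anomaly detector that flags unusual login events for human review. The features, thresholds, and library choice (scikit-learn's IsolationForest) are illustrative assumptions for this example only, not a description of any specific product or of the author's own tooling.

```python
# Minimal sketch: unsupervised anomaly detection for login-event triage.
# Feature names and values below are hypothetical, chosen only to illustrate the idea.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per login event: hour of day, failed attempts in the
# past hour, and whether the source IP is new for that account (0/1).
normal_logins = np.column_stack([
    rng.normal(13, 3, 500),       # logins cluster around business hours
    rng.poisson(0.2, 500),        # very few failed attempts
    rng.binomial(1, 0.05, 500),   # rarely from a new IP
])

# Train on routine activity; anything far outside it is scored as an outlier.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

# A 3 a.m. login with many failures from a new IP should rank as anomalous,
# while a midday login with no failures should not.
events = {"suspicious": [3, 8, 1], "routine": [14, 0, 0]}
for name, features in events.items():
    event = np.array([features])
    label = model.predict(event)[0]            # -1 = anomaly, 1 = normal
    score = model.decision_function(event)[0]  # lower = more anomalous
    print(f"{name}: label={label}, score={score:.3f}")
```

In practice, a detector like this would only prioritize alerts for analysts rather than act on its own, which matches the human-oversight theme above.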
These practices are crucial for data and patient protection. Security, privacy and reliability are all prerequisites for safe and effective deployment of AI.
The way forward: secure innovation for better care
As AI becomes more deeply woven into healthcare operations, organizations must embed governance and cybersecurity at every level, from data acquisition and model training to deployment and monitoring. Aligning AI governance with cybersecurity programs ensures that innovation advances without compromising security or trust.
When implemented responsibly, AI can deliver tangible value: faster workflows, more accurate diagnoses and, most importantly, better patient outcomes. But the organizations that truly lead the next era of healthcare will be those that view security, transparency, and oversight not as compliance checkboxes, but as the very foundation of innovation.
The future of healthcare will be shaped by those who act quickly and build safely. Healthcare leaders must ensure that AI fulfills its ultimate mission: delivering safe, effective, and patient-centered care.
About Scott Lundstrom
Scott Lundstrom is the senior healthcare and life sciences industry strategist at OpenText, a company that helps organizations securely manage and connect data across the enterprise, transforming data into trusted, AI-ready insights. He is a long-time industry analyst, CIO, and software developer who has supported complex regulated businesses in healthcare, life sciences, and consumer goods. At AMR, he contributed to the original SCOR model and helped launch the Top 25 Supply Chain program. He founded the healthcare industry practice at IDC Research and led that group for 13 years, and he has also held research leadership roles focused on AI, cloud, SaaS, enterprise applications, and analytics.
