The rapid deployment of artificial intelligence (AI) in healthcare has outpaced efforts to secure it. While AI systems promise faster diagnoses and more effective care, they also introduce new vulnerabilities that attackers can exploit, with potential consequences for patients and public trust.
These vulnerabilities are examined in a study entitled "Medicine in the age of artificial intelligence: cybersecurity, hybrid threats and resilience," published in Applied Sciences. The research warns that without resilience by design, AI-based medicine could become a high-risk target in an increasingly hostile cyber environment.
The authors warn that health systems are adopting AI faster than they are strengthening the institutional, technical and regulatory safeguards needed to defend it. As a result, the same technologies designed to improve care can also expose hospitals and patients to unprecedented forms of harm.
AI expands the healthcare sector's cyberattack surface
AI is significantly expanding the cyberattack surface of healthcare systems. Traditional medical technologies were often isolated or analog, limiting the potential harm from external interference. In contrast, AI-based medicine depends on continuous data streams, networked devices, cloud infrastructure, and automated decision pipelines. Each of these elements introduces new points of vulnerability.
The authors explain that AI systems rely heavily on large volumes of sensitive data, including medical images, genomic information and electronic health records. If this data is compromised, altered or poisoned, the consequences go beyond privacy violations. Manipulated information can lead to misdiagnoses, inappropriate treatment recommendations, or delayed interventions. In AI-assisted medicine, data integrity becomes as critical as data confidentiality.
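To make the integrity point concrete, here is a minimal sketch (not taken from the study) of how a pipeline might fingerprint records at the time they are validated and flag any that change before reaching an AI model. The record fields, patient identifiers and values are invented purely for illustration.

```python
import hashlib
import json

def fingerprint(record: dict) -> str:
    """Deterministic SHA-256 fingerprint of a record (keys sorted so field order doesn't matter)."""
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical records as they looked when the dataset was clinically validated.
validated = {
    "patient-001": {"age": 62, "hba1c": 7.1, "diagnosis": "type 2 diabetes"},
    "patient-002": {"age": 45, "hba1c": 5.4, "diagnosis": "none"},
}
manifest = {pid: fingerprint(rec) for pid, rec in validated.items()}

# The same records as later fed to the AI pipeline; one value has been silently altered.
current = {
    "patient-001": {"age": 62, "hba1c": 5.1, "diagnosis": "type 2 diabetes"},
    "patient-002": {"age": 45, "hba1c": 5.4, "diagnosis": "none"},
}

for pid, rec in current.items():
    if fingerprint(rec) != manifest[pid]:
        print(f"Integrity check failed for {pid}: record changed since validation")
```

A check like this catches alterations after validation, but it cannot detect data that was poisoned before the baseline was taken, which is why the study treats integrity as a lifecycle-wide concern rather than a single control.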
Medical imaging is highlighted as a particularly exposed area. AI models trained to detect tumors, fractures or organ abnormalities depend on standardized digital formats and automated workflows. Weaknesses in these systems can allow malicious actors to subtly modify images or metadata without immediate detection. Unlike overt system failures, these forms of interference can go unnoticed while quietly influencing clinical decisions.
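As a simplified illustration of why such tampering is hard to spot visually but easy to catch at the byte level, the sketch below (our own, not the paper's) compares a pixel-level checksum before and after a small, localized intensity change; the synthetic array stands in for a real imaging study.

```python
import hashlib
import numpy as np

def pixel_checksum(image: np.ndarray) -> str:
    """SHA-256 over the raw pixel buffer of an image array."""
    return hashlib.sha256(image.tobytes()).hexdigest()

# Stand-in for a 16-bit grayscale slice (values are illustrative, not real imaging data).
original = np.random.default_rng(0).integers(0, 4096, size=(512, 512), dtype=np.uint16)
baseline = pixel_checksum(original)

# A "subtle" manipulation: brighten a 5x5 patch by a few intensity levels.
# Visually negligible, but potentially enough to bias an automated lesion detector.
tampered = original.copy()
tampered[100:105, 100:105] += 8

max_change = np.abs(tampered.astype(int) - original.astype(int)).max()
print("Largest per-pixel change:", max_change, "intensity levels")
print("Checksum still matches?", pixel_checksum(tampered) == baseline)  # False: tampering detected
```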
The study also draws attention to ransomware and downtime attacks targeting hospitals. As AI systems become integrated into planning, diagnostics and resource allocation, disabling them can cripple entire facilities. The authors note that healthcare organizations are particularly attractive targets because downtime directly affects patient care, increasing pressure to pay ransoms or comply with attackers’ demands.
Notably, AI risks are not limited to external hackers. Insider threats, supply chain vulnerabilities, and poorly secured third-party software can all compromise AI-enabled healthcare systems. The complexity of modern medical AI ecosystems makes it difficult for institutions to maintain complete visibility and control over their security posture.
Hybrid threats blur the lines between cyber and clinical harm
The study highlights the rise of hybrid threats that combine technical attacks with strategic manipulation. In this context, healthcare becomes a potential target not only for financial gain but also for political, economic or societal disruption.
Hybrid threats can involve coordinated cyberattacks, disinformation campaigns and the exploitation of regulatory or organizational weaknesses. The authors argue that AI systems amplify the impact of such threats by speeding up decision-making and reducing human oversight. When clinicians rely on automated results, the margin for detecting subtle manipulation narrows.
The paper describes scenarios in which AI-based diagnostics could be intentionally distorted to undermine trust in healthcare institutions or public health responses. During crises such as pandemics or natural disasters, compromised AI systems could spread uncertainty, delay care, or fuel distrust. These outcomes extend beyond individual patients and affect national resilience and social stability.
A further concern is the manipulation of training data used to develop medical AI models. If data sets are biased, incomplete, or intentionally corrupted, the resulting systems may perform unevenly across populations. This creates not only clinical risks but also ethical and legal challenges, particularly when AI-based decisions disproportionately affect vulnerable groups.
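A minimal sketch of the kind of subgroup audit this concern implies: comparing a model's accuracy across patient groups before deployment and flagging large gaps. The evaluation data, group labels and disparity threshold below are invented for illustration and are not drawn from the study.

```python
from collections import defaultdict

# Hypothetical evaluation results: (patient_group, model_prediction, ground_truth)
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, pred, truth in results:
    total[group] += 1
    correct[group] += int(pred == truth)

accuracy = {g: correct[g] / total[g] for g in total}
print("Per-group accuracy:", accuracy)

# Illustrative tolerance for acceptable disparity between best- and worst-served groups.
if max(accuracy.values()) - min(accuracy.values()) > 0.10:
    print("Warning: performance is uneven across groups; review data coverage before deployment.")
```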
The authors argue that hybrid threats exploit gaps between technical safeguards and institutional preparedness. Many healthcare organizations focus narrowly on complying with data protection regulations while underestimating broader security and resilience challenges. This fragmented approach exposes systems to complex, multi-layered attacks that do not fit neatly into existing regulatory categories.
Building resilience in AI-driven medicine
The study advocates a resilience-by-design approach that integrates cybersecurity, governance and clinical practice from the outset. The authors argue that resilience should be treated as a fundamental requirement of AI-based healthcare, not an afterthought.
A key recommendation is end-to-end protection of the AI lifecycle. This includes securing data collection, storage, model training, deployment and ongoing operation. Each stage presents distinct risks, and a failure at any point can compromise the entire system. Continuous monitoring, validation and auditing are presented as essential safeguards against both accidental errors and malicious interference.
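One way to make lifecycle auditing tangible is an append-only, tamper-evident log that fingerprints each artifact as it moves from data collection to deployment. The sketch below is an illustrative assumption about how such a log could work, not a mechanism described in the paper; the stage names and byte strings stand in for real artifacts.

```python
import hashlib
import json
import time

audit_log = []  # append-only record of lifecycle events

def record_stage(stage: str, artifact: bytes) -> None:
    """Append a tamper-evident entry: each entry also hashes the previous entry."""
    prev = audit_log[-1]["entry_hash"] if audit_log else ""
    entry = {
        "stage": stage,
        "timestamp": time.time(),
        "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
        "prev_entry_hash": prev,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode("utf-8")
    ).hexdigest()
    audit_log.append(entry)

# Illustrative lifecycle stages; the payloads are placeholders for real artifacts.
record_stage("data_collection", b"raw imaging archive")
record_stage("model_training", b"serialized model weights v1")
record_stage("deployment", b"signed deployment bundle v1")

for e in audit_log:
    print(e["stage"], e["artifact_sha256"][:12], "<-", e["prev_entry_hash"][:12] or "genesis")
```

Because each entry includes a hash of the one before it, later auditors can pinpoint the stage at which an artifact was swapped or altered, which is the practical payoff of continuous validation across the lifecycle.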
Human factors also play a key role in resilience. The study highlights that clinicians, administrators and technical staff must be trained to understand the limitations and risks of AI systems. Overreliance on automated results without critical evaluation increases vulnerability. Maintaining human oversight and clear accountability structures is essential, particularly in high-stakes clinical settings.
The authors highlight governance alignment as another critical challenge, noting that healthcare organizations operate within multiple regulatory frameworks addressing data protection, medical devices, cybersecurity and AI governance. When these frameworks are implemented in isolation, gaps and contradictions can emerge. The study calls for integrated governance models that align technical standards with clinical and legal accountability.
As the European regulatory framework evolves, the study highlights the growing overlap between AI, cybersecurity and healthcare governance. The authors argue that effective compliance requires more than meeting minimum legal requirements: institutions must develop internal capabilities to adapt to evolving threats and regulatory expectations over time.
Finally, the authors stress that AI-enabled healthcare systems are part of critical national infrastructure, and that their failure can have cascading effects on public health, economic stability and social trust. Protecting these systems requires coordination between healthcare providers, regulators, technology developers and security agencies.
