For many, the question is no longer whether AI should be used in healthcare, but how regulation can keep pace with a technology that is already shaping care pathways.
UK patients are increasingly turning to generative AI tools to understand symptoms, interpret results and decide when to seek care. In fact, recent research suggests that as many as one in four people in the UK use platforms like ChatGPT for health advice. Clinicians and providers are experimenting with AI to triage demand, improve safety, reduce administrative burden and personalize treatment.
Regulation must therefore evolve alongside this adoption.
What AI is already doing in healthcare is broad but not yet deep
Across the UK’s NHS and private healthcare sector, AI is used to support early-stage clinical assessment, patient triage, operational planning and workforce optimization. These tools help clinicians focus their time where it matters most, while making care more accessible and responsive for patients.
Patients want fast, accurate answers to their health questions when they need them, while also wanting to feel in control of their health information. This is arguably why we are seeing the rise of tools like ChatGPT Health, and why we created Nu to provide evidence-based guidance within clinical guardrails.
However, two questions quickly emerge. First: how can we meet patient demand safely and at scale? Second: clinical AI tools will continue to evolve, so how can regulation remain forward-looking?
Designing safe, patient-centered AI is essential.
We know that without clinically informed design, generic AI tools risk spreading misinformation and increasing patient anxiety, leading to disengagement and distrust.
The clinical opportunity, then, is not to remove humans from care, but to use AI to protect continuity, surface risks earlier, and put actionable insights in the hands of clinicians before it is too late.
For patient-facing tools, AI in healthcare should be built around escalation rather than replacement. For example, Numan’s Aegis Monitoring flags risks and engages clinicians when needed, because AI is most effective when it supports, not replaces, clinical judgment.
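To make the escalation-first idea concrete, here is a minimal, purely illustrative sketch of how such routing logic might be structured. It is not Numan's actual implementation; the thresholds, field names and categories are hypothetical, and in practice they would be set and validated by clinicians.

```python
from dataclasses import dataclass
from enum import Enum


class Route(Enum):
    SELF_CARE_GUIDANCE = "self_care_guidance"  # evidence-based advice within guardrails
    CLINICIAN_REVIEW = "clinician_review"      # escalate to a human clinician
    URGENT_ESCALATION = "urgent_escalation"    # immediate clinical attention


@dataclass
class Assessment:
    risk_score: float      # 0.0-1.0, produced by an upstream model (assumed)
    confidence: float      # the model's confidence in its own assessment
    red_flags: list[str]   # symptoms matched against clinical red-flag rules


def route_patient(assessment: Assessment) -> Route:
    """Escalation-first routing: the AI supports, rather than replaces, clinical judgment.

    Any red flag or high risk goes straight to a clinician, and low confidence
    also escalates, because uncertainty is treated as a reason to involve a
    human rather than a reason to guess.
    """
    if assessment.red_flags or assessment.risk_score >= 0.8:
        return Route.URGENT_ESCALATION
    if assessment.risk_score >= 0.4 or assessment.confidence < 0.7:
        return Route.CLINICIAN_REVIEW
    return Route.SELF_CARE_GUIDANCE
```

The design choice the sketch illustrates is simple: the system's default behaviour under uncertainty is escalation, not autonomous advice.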
In both cases, the approach is to create safe and scalable AI that patients can trust with their health information and concerns.
This is also how the use of AI can move from innovation to early intervention.
For many, the true value of AI lies in its ability to spot patterns in large and complex data sets, pointing out warning signs that humans may overlook. Done right, it allows for earlier action and more personalized care, while potentially redefining how we approach preventive medicine. But unlike traditional software, AI will not remain static. It learns, adapts and evolves, which requires a very different way of designing and governing care.
The generative capabilities of AI are already being explored, including through the UK’s MHRA AI airlock and alternative validation approaches. These real-world testing environments allow regulators, providers and innovators to learn together, generating evidence while systems are actively used, rather than relying solely on static pre-deployment assessments.
Yet even with these early initiatives showing a clear appetite for delivery and innovation, unless regulatory approaches recognize AI as a continuously learning system, the UK risks blunting the very capabilities that could unlock earlier intervention and more preventative models of care.
The regulatory challenge in 2026 is clear.
AI innovation in healthcare currently remains limited by fragmented oversight and regulatory uncertainty. In England alone, responsibility for AI is spread across multiple regulators and oversight functions, before wider government interests are even considered. The result is a system without clear, end-to-end ownership of AI-based care.
This fragmentation makes it difficult for innovators – especially those operating responsibly – to understand what good compliance looks like in practice. Even safety-focused organizations can find themselves facing overlapping expectations, inconsistent guidance and slow paths to real-world deployment.
Access to data compounds this challenge. For AI systems to perform essential clinical and safety functions, they must have access to accurate and up-to-date patient records. Today, access to NHS data is inconsistent, while private providers generate increasing volumes of clinically relevant information that often cannot be shared with the wider system.
We need a more open, human-centered system that aims to remove barriers to innovation.
An open banking approach to health data, in which patients control how their information is shared among providers, could help bridge this gap. Consent-based access to data would support better coordination between NHS services, private providers and patients, while enhancing safety and continuity of care.
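To make the open banking analogy concrete, a minimal sketch of what a patient-controlled consent record could look like is below. The field names, scopes and 90-day window are hypothetical illustrations, not an existing NHS or open banking API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class ConsentGrant:
    """A patient-controlled, revocable grant, modelled loosely on open banking consents."""
    patient_id: str
    grantee: str          # e.g. an NHS service or a private provider
    scopes: set[str]      # e.g. {"medications", "lab_results"}
    expires_at: datetime  # consent is time-limited as well as revocable
    revoked: bool = False


def may_access(grant: ConsentGrant, requester: str, scope: str) -> bool:
    """Access is allowed only while the grant is live and covers the requested scope."""
    return (
        not grant.revoked
        and grant.grantee == requester
        and scope in grant.scopes
        and datetime.now(timezone.utc) < grant.expires_at
    )


# Example: a patient shares lab results with a private provider for 90 days.
grant = ConsentGrant(
    patient_id="patient-123",
    grantee="private-provider-A",
    scopes={"lab_results"},
    expires_at=datetime.now(timezone.utc) + timedelta(days=90),
)
assert may_access(grant, "private-provider-A", "lab_results")
assert not may_access(grant, "private-provider-A", "medications")
```

The point of the sketch is that consent, not institutional ownership, becomes the unit of data sharing: the patient decides who sees what, for how long, and can withdraw that permission at any time.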
This is not an argument for autonomous AI. Healthcare remains complex, contextual and deeply human. Effective AI governance therefore depends on clear accountability, robust clinical oversight and meaningful human involvement.
The ideal AI landscape relies on alignment: proportionate regulation, real-world testing environments, and data frameworks that reflect how care is actually delivered today. Responsibility in healthcare should not mean inertia. It should be a tool to shape innovation so that it brings tangible benefits to patients, clinicians and the health system.
AI has the potential to make healthcare more proactive, personalized and resilient, but only if regulations evolve in tandem.
Jamie Smith Webb, technical director at Numan.
