The health sector has spent billions on automation over the past few decades, but for many, AI has not brought the expected efficiency or financial returns. Recent research from MIT found that 95% of organizations have so far reported no ROI from their AI programs.
One of the main causes of this problem is that many solutions and workflows are disconnected. Healthcare organizations have invested in automation without accountability and implemented AI systems that lack the context needed to operate reliably. Instead of reducing friction, these tools can actually increase administrative burden, while leaving the organization vulnerable to life-threatening errors.
AI is a tool. Humans are responsible. But we can build accountability into AI systems by prioritizing data integrity, human oversight, and continuous learning. When AI is honest and acts as a connector in healthcare workflows, it frees clinicians' time, ensures accuracy, and protects revenue.
Data integrity is key to responsible AI
To keep these systems honest, healthcare organizations must ensure their data is well-governed and well-contextualized. Today, many healthcare organizations ignore this context. When clinical and operational data resides in separate point solutions or in existing EHRs that do not communicate with each other, AI agents cannot operate with the appropriate context needed to produce accurate and reliable results. Using an AI agent that operates on partial data is like driving with blinders on, and in healthcare, where every decision has real-world consequences, guesswork is not an option.
Data interoperability is the starting point for responsible AI. When healthcare organizations unify data across point solutions, AI is able to operate across context, streamlining workflows and reducing administrative friction. In turn, the patient experience is improved.
Balancing AI and human oversight
Successful integration of AI into healthcare requires a balance of technology and human expertise, with agentic AI changing the level of human oversight needed in healthcare. Autonomous systems are able to act proactively and manage complex processes, like flagging a missed preventative care task or submitting a prior authorization request, but that doesn't mean human oversight isn't still necessary. Human experts must remain involved as strategic supervisors and final decision-makers, while AI handles routine work, giving healthcare professionals time for patient care and other high-value tasks. By pairing the analytical power of AI with human expertise and empathy, healthcare organizations can create a system that empowers both patients and clinicians.
Strengthening AI judgment through continuous learning
The healthcare industry is constantly evolving with new regulations and clinical guidelines to follow, as well as changing patient expectations. A responsible AI tool is able to adapt to the industry through continuous learning and feedback processes.
Continuous learning gives AI the real-world clinical, technical, and emotional context needed to make informed decisions. Staff can strengthen AI performance over time with feedback that reinforces appropriate and compliant AI outcomes, helping to avoid the common pitfall of context-free capabilities. For example, when using an AI medical coder, a human auditor must review the AI's output and provide corrections that train the system toward higher accuracy. Continuous learning not only ensures accuracy, but can also make AI tools easier for clinicians to use. Through feedback, a clinician working with an ambient listening scribe can train the AI to format clinical notes in their preferred style, so the tool fits more easily into their workflow.
Continuous learning creates a valuable feedback loop that improves both speed and quality: ongoing input from clinicians and staff strengthens the AI's judgment, allowing it to perform its tasks more quickly and reliably.
Accountability is the new AI metric
Keeping AI honest is not about slowing down innovation, but about creating systems that support clinicians, patients, and healthcare as a whole. The future of healthcare will be shaped by organizations that adopt AI-enabled, connected workflows while preserving human expertise. Everyone benefits when AI is responsible, contextual and integrated. Organizations reduce inefficiencies and protect revenue, patients benefit from smoother access to care and faster approvals, and clinicians have more time to do what they were trained to do: care for patients.
Photo: Panya Mingthaisong, Getty Images
Ajai Sehgal is Director of AI at IKS Health, leading the organization-wide AI vision and strategy to leverage data, analytics and advanced technologies to accelerate innovation, improve outcomes and amplify impact across the healthcare ecosystem. A seasoned leader with experience ranging from startups to Fortune 100 companies, Ajai was most recently the inaugural Chief Data and Analytics Officer at Mayo Clinic, where he led the use of more than a century of clinical data to power groundbreaking medical innovations and improve patient care. He also served as chair of digital technology at Mayo Clinic's Center for Digital Health.
Ajai’s global leadership experience includes senior technology roles at EagleView, Hootsuite and The Chemistry Group, overseeing data and analytics, software engineering, IT, security and operations. Earlier in his career, he served 16 years in the Royal Canadian Air Force before joining Microsoft, where he played a key role in creating and transforming Expedia into the world’s largest travel company. A strong advocate for responsible AI innovation, Ajai continues to guide and advise the broader technology community.
This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers. Click here to find out how.