The healthcare sector continues to face mounting pressures, including rising patient demand, chronic disease burdens, and resource shortages. Artificial intelligence (AI) and machine learning (ML) tools are becoming increasingly accessible and can help alleviate these pressures.
AI can be a useful tool for providers by reducing the time they spend on administrative tasks, allowing them to focus their energy on direct patient care. It can assist with coding and billing or with transcribing visit notes into patient medical records. AI can even be used to aid diagnosis, drug discovery, and personalized care.
At the same time, there are inherent risks associated with using AI in a healthcare practice. AI technology can introduce algorithmic bias, data privacy compliance issues, or security and transparency concerns. AI uses algorithms to find complex nonlinear correlations in massive data sets, but it often cannot explain how it arrives at its results. AI may also produce algorithmic errors, because it may continue to generate work product based on prior data that is no longer accurate or complete.
In the healthcare sector, AI errors pose risks to patient safety and health and raise the question of who is accountable for any type of harm caused. As such, regulators will expect a level of human intervention with AI technology to limit these risks. Therefore, healthcare organizations must have robust AI policies and procedures in place.
Compliance practices
It is important for healthcare organizations to establish appropriate AI compliance practices. First, when starting the procurement process, do your homework. What AI tool are you implementing and how do you plan to use it? Is it safe, responsible, valid and reliable? Does the AI technology meet all the necessary privacy and security requirements? What is the scope of the data used by the AI system? You will also want to make sure that you have carefully reviewed and negotiated the contractual license agreement for the AI system.
Second, once you have selected an AI system for your organization, you need to have a post-implementation monitoring process in place. Continuous monitoring is necessary to ensure that the AI system does not deviate from its intended purpose. You should also implement regular updates to the system to ensure it runs smoothly. In most cases, you will need a provider to make the final clinical decision. You will want to have a written AI compliance plan in place and, depending on the size of your organization, you may also want to create an AI governance committee. You need to train your employees who use AI technology and may want to conduct periodic audits or risk assessments as a form of monitoring the technology.
Third, there may be certain legal requirements for notice and consent. You should ensure that you have policies in place to inform patients about the use of AI technology and obtain their prior consent for the use of the technology, if necessary. You also need to educate your patients about the limitations of AI and the potential for errors so they can make voluntary, informed decisions about their care.
Enforcement risks
Healthcare organizations should consider applicable regulations that may impact the use of AI in their practice or organization, such as HIPAA rules, FDA regulations, the False Claims Act, and state-level legislation.
If AI technology does not meet HIPAA privacy and security requirements, healthcare organizations could face enforcement action from the HHS Office for Civil Rights.
The Food and Drug Administration may also play a role in regulating and approving AI/ML medical devices. Healthcare facilities will want to ensure they are using approved devices and that AI/ML devices are not used beyond the scope of their approved uses. Penalties may be imposed on healthcare organizations that market unapproved uses.
At the state level, the state attorney general can intervene if providers use AI tools in a way that misrepresents their services, violates privacy, or enables discrimination in the delivery of patient care.
Improper use of AI may also result in False Claims Act enforcement action. For example, the Department of Justice recently conducted investigations into pharmaceutical companies and digital health companies regarding their use of AI in electronic health record systems to determine whether the AI tool resulted in excessive or medically unnecessary care.
What lies ahead?
Healthcare organizations will need to continue to monitor the ever-changing AI regulatory landscape. Several states, including Utah, Texas, California, and Virginia, have passed AI legislation. Most recently, on December 11, 2025, President Trump signed Executive Order 14365, “Ensuring a National Policy Framework for Artificial Intelligence,” aimed at reducing state-level regulation and instead establishing a unified federal approach to regulating AI.
