As healthcare organizations approach 2026, they will also be entering a new era of AI regulation. While Congress has yet to pass comprehensive AI legislation and federal regulatory guidance is constantly evolving, states have stepped in to fill the void. The new year will see several new laws imposing disclosure, transparency and data protection requirements on those who develop, deploy or use AI in healthcare settings. This article highlights key laws that healthcare organizations should keep in mind.
California: no more pretending to be a doctor
California has been particularly active in regulating AI in healthcare. Building on AB 3030 and SB 1120, both of which took effect in January 2025, the state has added new requirements targeting AI systems that could mislead patients into believing they are interacting with licensed healthcare professionals.
As of January 1, 2026, AB 489 prohibits developers and deployers of AI systems from using terms, letters, phrases, or design elements that indicate or imply that the AI holds a health care license. The law also prohibits advertising or AI features that suggest care is provided by an appropriately licensed natural person when that is not the case.
What makes AB 489 noteworthy is its enforcement mechanism: Health care professional licensing boards now have jurisdiction over these violations and can seek injunctions under existing licensing law.
California also adopted SB 243, effective the same day, which regulates “companion chatbots” designed to provide ongoing interaction and emotional support. The law requires clear notice that users are interacting with AI and imposes protocols to (i) prevent the chatbot from producing responses that could encourage self-harm or suicidal ideation and (ii) direct the user to a crisis service provider if the user “expresses suicidal ideation, suicide, or self-harm.” Organizations offering mental health support apps, patient engagement chatbots, wellness platforms, or communications tools should pay close attention. California is not alone; Illinois, Nevada, and Utah have all begun regulating chatbots to varying degrees.
Texas: New Disclosure Requirements for AI Use
Meanwhile, Texas has enacted one of the most ambitious AI laws in the country. The Texas Responsible Artificial Intelligence Governance Act (TRAIGA), enacted in June 2025 and effective January 1, 2026, establishes a broad range of governance and other requirements for the use of AI systems. It also contains specific disclosure requirements for licensed healthcare professionals. Under TRAIGA, practitioners must provide patients (or their personal representatives) with a conspicuous written disclosure of the practitioner’s use of AI in the patient’s diagnosis or treatment. This disclosure must occur before or at the time of the interaction; in an emergency, it must be provided as soon as reasonably practicable.
In addition to the disclosure requirement, TRAIGA prohibits developing or deploying AI systems with the specific intent to discriminate against individuals based on protected characteristics, although disparate impact alone is not sufficient to establish discriminatory intent.
Enforcement of TRAIGA falls to the Texas Attorney General, who can impose civil penalties ranging from $10,000 to $200,000 per violation, with amounts varying depending on whether the violation is curable. These penalties can accrue daily for continued violations, so compliance is not something to put off.
TRAIGA follows a separate Texas law (SB 1188) that took effect September 1, 2025. SB 1188 authorizes practitioners to use AI for diagnostic or treatment purposes, provided the practitioner acts within the scope of their license and personally reviews all content or recommendations generated by the AI before a clinical decision is made. Like TRAIGA, SB 1188 requires practitioners to disclose their use of AI to patients.
AI Transparency: What’s Under the Hood?
Beyond healthcare-specific requirements, several states are imposing broader AI transparency obligations that will affect healthcare organizations. For example, California’s AI Transparency Act (SB 942), also effective January 1, 2026, requires “covered providers” (defined as those with one million or more monthly users) to offer free tools that allow users to determine whether content was generated by AI. Telehealth platforms, patient portals, and healthcare marketing operations with large user bases should evaluate whether these requirements apply to them.
Likewise, California’s AB 2013 requires AI developers to disclose information about the data used to train their generative AI systems. Healthcare AI vendors should be prepared to answer questions about the data underlying the clinical decision support, diagnostic, or communications tools they sell.
Organizations should not assume that their vendors are in compliance. Deployers remain accountable, and contracts, due diligence practices and governance expectations must evolve accordingly. Key issues for vendor relationships now include training data sources, bias testing protocols, validation checks, and continuous performance monitoring.
The Virginia model reaches the Midwest and New England
If the consumer privacy laws coming into force in Indiana, Kentucky, and Rhode Island on January 1, 2026, look remarkably similar, that is no coincidence. All three are based on the Virginia Consumer Data Protection Act (VCDPA), which has served as a model for state privacy legislation across the country. The VCDPA model gives consumers the right to access, correct, delete, and transfer their data, as well as the right to opt out of targeted advertising, data sales and, importantly for AI, profiling that produces legal or similarly significant effects.
These laws also require data protection impact assessments for high-risk processing activities, including profiling. The good news for HIPAA-regulated entities is that all three laws exempt protected health information and provide exclusions for covered entities and business associates acting within the scope of HIPAA. But this is not a blanket exemption for healthcare organizations: it applies to data and activities regulated by HIPAA, not to everything a healthcare organization does.
A wrench in the works: the December 11 executive order
As healthcare organizations prepared for January 1 compliance, the White House threw a curveball. On December 11, 2025, President Donald Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence” (the AI Executive Order), which seeks to preempt state AI laws and establish a “single national framework” for regulating AI.
The order directs the United States Attorney General to establish within 30 days an AI Litigation Task Force to challenge state AI laws that the administration determines are inconsistent with federal policy – including on the grounds that such laws unconstitutionally regulate interstate commerce or are preempted by federal regulations. The U.S. Commerce Secretary must identify “onerous” state AI laws within 90 days, and the order specifically cites Colorado’s AI law as an example of problematic state regulation.
What does this mean for the laws discussed above? Uncertainty. The AI Executive Order does not immediately invalidate any state law, and critics have already suggested it will face legal challenges. But it does signal that the federal government may actively oppose enforcement of certain state AI requirements, potentially including some of the laws taking effect on January 1, 2026.
For now, these state laws remain in effect. Organizations should continue their compliance preparations while closely monitoring federal developments. The patchwork of state regulation that gave rise to the AI Executive Order is unlikely to disappear overnight, and healthcare organizations operating in multiple states will need to navigate this evolving legal landscape carefully.
Next steps
Healthcare organizations developing, deploying or using AI should consider the following as they begin the new year:
- Audit patient-facing AI systems. Identify all AI tools that interact with patients and assess whether their design or functionality could be interpreted as implying licensure or human oversight that does not exist.
- Implement disclosure protocols. For organizations operating in Texas, develop workflows to ensure patients are informed about the use of AI in diagnosis or treatment before or at the point of care.
- Assess the applicability of privacy law. Determine whether consumer data processing activities fall outside the scope of HIPAA and may trigger obligations under the privacy laws of Indiana, Kentucky, Rhode Island, or other states.
- Continue to monitor state-level developments in healthcare AI. State lawmakers continue to propose bills regulating the use of AI in healthcare, including measures to prevent the unauthorized practice of medicine and other licensed professions and to oversee health insurers’ use of AI in utilization review, claims processing, and other areas of concern.
The patchwork of state AI regulations will only become more complex, and the December 11 AI Executive Order adds a new layer of federal-state tension. Organizations that invest in compliance infrastructure now will be better positioned to adapt as the legal landscape continues to evolve. Health Law Rx will continue to monitor these developments and provide updates as states, courts, and the federal government refine their approaches to AI governance in healthcare.
