How AI could transform the US healthcare system
Fox News Senior Medical Analyst Dr. Marc Siegel examines on "The Story" how artificial intelligence could dramatically improve the healthcare system, but emphasizes that human doctors are still part of the equation.
Artificial intelligence is rapidly reshaping health care. It now supports diagnostic imaging, clinical decision tools, patient messaging and back-office workflows. According to the World Economic Forum, 4.5 billion people still lack access to essential care, and the global shortage of healthcare professionals could reach 11 million by 2030. AI could help close this gap.
However, as AI becomes more deeply integrated into care, regulators are focusing on a simple question: Should patients be informed when AI plays a role in their care?
In the United States, no federal law requires comprehensive disclosure of AI use in healthcare. Instead, a growing patchwork of state laws fills the gap. Some states require clear disclosure. Others enforce transparency indirectly by limiting how AI can be used.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive offers straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM bulletin.
STATE-LEVEL AI RULES SURVIVE – FOR NOW – AS SENATE PASSES MORATORIUM DESPITE PRESSURE FROM WHITE HOUSE

AI now supports many healthcare decisions, from patient communications to coverage reviews, making transparency more important than ever for trust and accountability. (Kurt “CyberGuy” Knutsson)
Why AI disclosure matters for trust
Transparency is not a technicality; it is a matter of trust. Research across industries shows that people expect to be informed when AI affects decisions that matter to them. In health care, this expectation is even stronger. An analysis published by CX Today found that when AI use is hidden, trust erodes quickly, even when results are accurate.
Health care depends on trust. Patients follow treatment plans, share sensitive information, and remain engaged when they believe care decisions are ethical and responsible.
How AI disclosure relates to HIPAA and informed consent
Although HIPAA does not directly regulate artificial intelligence, its principles still apply. Covered entities must clearly explain how protected health information is used and protected.
When AI systems analyze or generate clinical information using patient data, non-disclosure can compromise this goal. Patients may not fully understand how their information influences care decisions.
Disclosure also supports informed consent. Patients have the right to understand the material factors that influence communication about diagnosis, treatment or care. Just as clinicians disclose new medical procedures or devices, the meaningful use of AI must be explained, so patients can ask questions and stay engaged in their care.
AI TOOLS COULD WEAKEN DOCTORS’ SKILLS IN DETECTING COLON CANCER, STUDY SUGGESTS

States are stepping in where federal rules fall short, creating new disclosure requirements when AI influences care access, claims, or treatment decisions. (Kurt “CyberGuy” Knutsson)
What does AI disclosure mean in healthcare?
AI disclosure means informing patients or members when artificial intelligence systems are used in healthcare decisions. This may include clinical messaging, diagnostic aids, utilization review, claims processing, or coverage determinations. The goal is transparency, accountability and patient trust.
Health care activities most likely to trigger a disclosure
According to Morgan Lewis' analysis, disclosure requirements most often apply when AI is used for:
- Clinical communications with patients
- Utilization review and utilization management
- Claims processing and coverage decisions
- Mental health or therapeutic interactions
These areas are considered high impact because they directly affect access to care and understanding of health information.
Risks of not disclosing the use of AI
Healthcare organizations that fail to disclose their use of AI face real consequences. These include increased risk of litigation, reputational damage and erosion of patient trust. Ethical concerns about autonomy and transparency may also trigger regulatory review.
MORE AMERICANS ARE TURNING TO AI FOR HEALTH ADVICE

Clear AI disclosure helps patients stay informed and engaged, reinforcing that licensed healthcare professionals remain responsible for every medical decision. (Kurt “CyberGuy” Knutsson)
How states shape AI disclosure rules
States are taking different paths to regulating AI in healthcare, but most are starting with a common goal: greater transparency when technology influences care.
California focuses on communications and coverage decisions
California has taken one of the most comprehensive approaches.
AB 3030 requires clinics and physician offices that use generative AI for patient communications to include a clear disclaimer, along with instructions for how to contact a human healthcare professional.
SB 1120 applies to health plans and disability insurers. It requires safeguards when AI is used for utilization review, mandates disclosure, and confirms that licensed professionals make medical necessity determinations.
Colorado regulates high-risk AI systems
Colorado's SB 24-205 targets AI systems considered high risk: tools that significantly influence decisions such as the approval or denial of health services.
Entities must implement safeguards against algorithmic discrimination and disclose the use of AI. Although broader than just clinical care, the law directly affects patient access decisions.
Utah emphasizes mental health and regulated services
Utah has tiered disclosure rules that intersect with health care.
HB 452 requires mental health chatbots to clearly disclose the use of AI. SB 149 and SB 226 expand disclosure requirements to regulated professions, including health care professionals.
This tiered approach ensures transparency in therapeutic interactions and clinical services.
Other states expanding AI transparency
Several other states are moving in the same direction. Massachusetts, Rhode Island, Tennessee, and New York are all considering or implementing rules requiring disclosure and human review when AI influences utilization review or claims outcomes. Even where clinical diagnosis is not covered, these laws increase accountability where AI affects access to care.
What does this mean for you?
If you are a patient, expect more transparency. You may see AI disclosures in messages, coverage reviews, or digital interactions. If you work in healthcare, AI governance is no longer optional. Disclosure practices must align across clinical, administrative and digital systems. Training staff and updating patient notices will be as important as the technology itself. Trust will increasingly depend on how openly AI is introduced into care.
CLICK HERE TO DOWNLOAD THE FOX NEWS APP
Take my quiz: How safe is your online security?
Do you think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get personalized analysis of what you’re doing right and what needs improvement. Take my quiz here: Cyberguy.com.
Kurt’s Key Takeaways
AI can improve efficiency, expand access, and support clinicians. Yet its value depends on trust. Disclosure does not slow innovation; it builds trust in the technology and the professionals who use it. As states continue to act, transparency will likely become the norm rather than the exception when it comes to AI in healthcare.
If AI helps guide your care, would knowing when and how it is used change the way you trust your healthcare professional? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
