AI in Healthcare

Federal AI policy threatens prior authorization reform

December 24, 2025

Automated systems play an increasingly important role in access to health services. Insurers frequently use algorithms and artificial intelligence (AI) to route applications, make coverage decisions, complete records and forms, and even make recommendations on medical necessity. While automation can improve speed, over-reliance on AI systems risks inappropriate denials, biased decision-making, and a lack of individualized clinical review. Too often, computers and algorithms replace, rather than complement, clinicians’ judgments and recommendations about the care patients need.

A recent national survey of health insurers found that most are already using automated AI systems for prior authorization (PA) requests. In the individual and group markets, around 3 out of 4 plans report using AI for PA approvals, which can help reduce delays. However, a smaller but notable share (around 8-12%) uses AI to justify PA denials. These automated denials endanger patients’ access to care.

The Administration’s recent Executive Order on AI, “Ensuring a National Policy Framework for Artificial Intelligence,” aims to limit the ability of states to adopt and enforce their own AI safeguards. Earlier this year, Congress rejected legislative proposals to limit state regulatory authority over AI. The EO directs the Department of Justice (DOJ) to identify and challenge state laws that it considers to be in conflict with as-yet-undetermined federal AI policy. It also encourages the DOJ to target state laws considered “onerous or excessive.” Although not specific to health policy, the EO squarely targets state legislation that regulates AI systems and automated decision-making in health care. The threat of DOJ involvement constitutes real pressure, but EOs cannot preempt state laws in this way.

In the absence of an enforceable federal AI regulatory framework, states are increasingly filling the gap by enacting their own AI legislation. Two common types are AI-specific laws covering high-risk uses of AI and laws that limit how prior authorization decisions are made. State AI-specific laws often create new protections against discrimination (one of the targets the EO sets for the DOJ). Others clarify obligations and provide for enforcement of existing laws, such as setting transparency requirements and confirming that consumer protections apply to the use of AI in high-risk contexts (often defined to include health care and health insurance). For example, Colorado’s landmark Consumer Protections in Interactions with Artificial Intelligence Systems Act applies to AI used in health care decisions, including utilization decisions.* It protects against bias, requires plans to disclose important data and methodologies, and guarantees an individual’s right to appeal an AI-generated health care decision.

PA-specific legislation often requires clinician review of automated decisions, prohibits fully automated denials, and/or mandates public reporting on approval and denial patterns and processes. Examples include Texas, which passed a law in 2025 prohibiting utilization review agents from using an automated decision system to issue an adverse determination without human oversight. Arizona and Maryland passed similar laws prohibiting the use of AI as the sole basis for a denial of medical necessity.
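To make the rule these laws share more concrete, here is a minimal sketch in Python (hypothetical names and fields; it is not drawn from any state statute or actual payer system). It shows the asymmetry the Texas, Arizona, and Maryland laws require: automation may approve a request on its own, but an adverse determination is never final until a clinician has reviewed it.

    from dataclasses import dataclass
    from typing import Literal, Optional

    # Hypothetical human-in-the-loop prior authorization flow, for illustration only.
    # "ai_recommendation" stands in for whatever automated scoring a payer might use.

    @dataclass
    class PriorAuthRequest:
        request_id: str
        ai_recommendation: Literal["approve", "deny"]
        clinician_reviewed: bool = False
        clinician_decision: Optional[Literal["approve", "deny"]] = None

    def decide(request: PriorAuthRequest) -> str:
        """AI can speed up approvals, but a denial is never issued on AI output alone."""
        if request.ai_recommendation == "approve":
            return "approved"  # automation used only to reduce delay
        if not request.clinician_reviewed:
            return "pending_clinician_review"  # adverse decisions require human review
        return "denied" if request.clinician_decision == "deny" else "approved"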

The EO threatens to weaken enforcement of these protections and push states toward reforms that are less meaningful and easier for payers to circumvent. Without states advocating for patients, automation can harm people who already lack protection against opaque algorithmic systems, hard-to-challenge denials, and processes shielded as proprietary trade secrets. This is not a red state or blue state issue. The administration should let states take steps to protect consumers from the harms of AI.

The Trump administration is also pushing to expand the use of AI in health care. The Centers for Medicare & Medicaid Services (CMS), through its Innovation Center, launched WISeR (Wasteful and Inappropriate Service Reduction), a pilot program being tested in six states that applies AI to prior authorization for select items and services under traditional Medicare. HHS hopes to set a precedent with this program, scheduled to launch Jan. 1, and to expand AI to more HHS programs. HHS recently released a revised AI Strategy, which Secretary Kennedy described as the “model for the use of AI” in the federal government and which signals HHS’s commitment to being “all in” on AI. This week, HHS also released a request for information to “seek broad public input on how HHS can accelerate the adoption and use of artificial intelligence in clinical care for all Americans.” Once AI-based prior authorization is standardized in these spaces and can be held up as the “new federally accepted standard,” it will be much more difficult for states to regulate.

Provider groups, including the American Medical Association, have strongly criticized WISeR. The House and Senate have introduced companion bills to stop it from moving forward. While it is encouraging that Congress is highlighting the risks of AI in PA, the bills are unlikely to gain traction this Congress.

As we enter the new year, the conflict over AI-based PA (as well as other healthcare utilization management) will continue to play out at the federal and state levels. Several developments are worth monitoring:

  1. Details on the implementation of WISeR and any emerging trends in denials or appeals will be of critical importance as stakeholders assess its impact.
  2. If the SMARTER Care Act gains traction in Congress, it will indicate how seriously lawmakers are taking concerns about AI in healthcare.
  3. States may consider rolling back AI guardrails that apply to health insurers generally and to prior authorization specifically, for fear of preemption or legal challenges.
  4. States that have already taken steps to regulate AI and/or PA can serve as test cases to determine whether protections can be developed to survive federal challenges under the EO.

Stay tuned to the NHeLP Prior Authorization Series in 2026 for more updates.

*A special legislative session in Colorado pushed back implementation by six months, to June 2026.
