clearpathinsight.org

AI in Healthcare
Without patient engagement, AI for healthcare is fundamentally flawed

January 24, 2026 · 6 min read

The following is a guest post from Nabila El-Bassel, Ph.D., DSW, founding director of the Social Intervention Group at Columbia University School of Social Work.

What if a doctor assumed a patient was healthy simply because that patient rarely came to the clinic?

Researchers discovered serious flaws in an artificial intelligence (AI) tool used by a UnitedHealthcare unit: it consistently classified Black patients as healthier than white patients with the same conditions, not because they were healthier, but because they incurred lower healthcare costs. The tool failed to recognize that the lower spending reflected barriers to accessing healthcare.

This is not an isolated discovery; it is a warning about a larger problem affecting AI in healthcare. Unless it is designed from the outset with meaningful patient and community input, AI risks excluding the most vulnerable and reproducing existing biases like the one described above.
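The mechanism behind this kind of failure can be sketched in a few lines. The following is a minimal illustration on synthetic data, not the audited model or its real features: when past healthcare cost is used as a proxy label for health need, a group that spends less because of access barriers is scored as lower-risk even when its underlying illness is identical.

```python
# Illustrative sketch (synthetic data, hypothetical model): using healthcare
# *cost* as a proxy for health *need* penalizes a group facing access barriers.
import random

random.seed(0)

def simulate_patient(group):
    need = random.uniform(0, 1)            # true severity: same distribution for both groups
    access = 1.0 if group == "A" else 0.5  # group B faces barriers and spends half as much
    cost = need * access                   # observed spending, the proxy label
    return {"group": group, "need": need, "cost": cost}

patients = [simulate_patient(g) for g in ("A", "B") for _ in range(1000)]

# A cost-based "risk score" selects the top 10% of spenders for extra care.
patients.sort(key=lambda p: p["cost"], reverse=True)
top = patients[: len(patients) // 10]

share_b = sum(p["group"] == "B" for p in top) / len(top)
avg_need = {g: sum(p["need"] for p in patients if p["group"] == g) / 1000 for g in "AB"}

print(f"average true need  A={avg_need['A']:.2f}  B={avg_need['B']:.2f}")  # roughly equal
print(f"share of group B in the top-decile 'risk' list: {share_b:.0%}")
```

Both groups are equally sick, yet the barrier-facing group is almost entirely absent from the "high-risk" list, which is the pattern the researchers observed in the deployed tool.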

The Urgent Need for AI in Healthcare

AI has never been more necessary in healthcare. Medicaid and other health programs are being cut, jeopardizing the health coverage of more than 10 million of the most vulnerable Americans. The Trump administration recently unveiled an AI action plan, but, according to the Brookings Institution, it failed to include “mechanisms such as co-creation (and) participatory design…” to “serve citizens and humanity in a fair, transparent and accountable manner.”

I have spent more than three decades designing and testing global public health interventions and conducting research largely funded by the National Institutes of Health. My expertise lies in working in close partnership with communities, including people with lived experience, throughout the analysis, design, implementation, publication, and presentation of intervention results.

Why this lack of patient participation?

When I see how AI is developing without patient input, I am concerned. Unfortunately, when it comes to AI, those most affected are rarely invited to help shape the technologies that will decide their future. In a 2024 review of 10,880 articles describing applications of AI or machine learning in healthcare, less than 0.2% included some form of community engagement. More than 99% of these so-called health “innovations” were created without consulting the people most affected by them.

In contrast, traditional health technologies such as medical devices involve patients undergoing treatment nearly half the time. Devices such as insulin pumps and heart monitors must undergo rigorous FDA review, including clinical validation, user testing, and post-market surveillance. The pace of AI may have outpaced regulation, but that is no excuse; if anything, its scale and scope demand closer scrutiny.

My colleagues and I developed a participatory public health research plan. In our project to reduce overdose deaths, we developed community communications campaigns that could address unique, location-specific factors. Without communities as true partners from the start, AI risks reproducing, or even worsening, inequalities.

Woebot, a therapeutic chatbot, was launched in 2017 to improve mental health through conversation. It was designed by clinical psychologists but excluded members of the community. Although a 2021 study reported promising results after eight weeks, the majority of users were white women employed full time, missing key demographics: the underemployed or unemployed, and non-white people facing structural barriers to care. This exclusion is particularly harmful when AI is deployed in contexts already marked by deep health inequalities. Because Woebot, like many chatbots, was trained on largely uniform data, its lack of cultural, racial, and socioeconomic nuance means that it often misinterprets or ignores how distress is expressed across different walks of life.

In addiction treatment, AI systems can flag missed appointments as noncompliance without recognizing barriers such as caregiving responsibilities or lack of transportation. Furthermore, because existing data are scarce, and the data that do exist are skewed by disparities, bias can flourish in each new algorithm, impacting healthcare, social services, and addiction treatment programs.
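As a toy illustration of that failure mode (the field names and thresholds here are invented, not drawn from any real clinical system), compare a rule that counts missed visits in isolation with one that routes patients with documented barriers to outreach instead of a penalty flag:

```python
# Hypothetical sketch: a naive "compliance" flag vs. one that accounts for
# documented access barriers (transportation, caregiving duties).
def naive_flag(patient):
    # Counts missed visits alone: barriers and noncompliance look identical.
    return patient["missed_appointments"] >= 2

def context_aware_flag(patient):
    barriers = patient.get("no_transport", False) or patient.get("caregiver", False)
    if patient["missed_appointments"] >= 2 and barriers:
        # Missed visits with a documented barrier trigger outreach, not a penalty.
        return "needs-outreach"
    return "non-compliant" if patient["missed_appointments"] >= 2 else "ok"

p = {"missed_appointments": 3, "no_transport": True}
print(naive_flag(p))          # True: labeled non-compliant
print(context_aware_flag(p))  # "needs-outreach": same data, different action
```

The point is not this particular rule, but that the distinction between "won't come" and "can't come" only exists in a system whose designers asked patients why appointments are missed.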

Barriers to input inclusion

Researchers and designers may hesitate to include community input in the design process for two main reasons. First, AI designers may not know how to present their ideas to the community. In fact, a researcher recently asked me about best practices for paying participants (a must) and where to find these community co-designers. Where to look depends on the research question at hand.

Second, AI designers may fear possible ethical and privacy issues related to customer or participant data. The public may also be wary of their own participation due to similar privacy concerns. Fortunately, frameworks ensuring these protections in AI already exist and are improving over time.

Best practices for including input

The publication A participatory approach to AI for social good includes principles intended to help AI researchers ensure that community-defined goals, values, and needs are met. Additionally, a model I developed for providers and researchers advocates ethical community engagement at every phase of AI design. The model includes targeted questions to help protect data privacy, include community voices, and align AI tools with community expectations. Researchers, designers, and potential consumers can use both frameworks to ensure fair, efficient, cost-effective, and safe AI design.

To further ensure that AI is deployed for social good, academic institutions should support initiatives providing modeling of these efforts. At Columbia University, we launched the Artificial Intelligence for Social Good and Society Initiative to train a new generation of AI researchers in public health, social work, and data science. The research will be accessible to other universities and anyone interested in equitable AI. Open calls for collaboration will also include faculty outside of Columbia.

Certainly, researchers and scientists face challenges following the reduction or elimination of funding to academic institutions. In response to the aforementioned AI Action Plan, the Brookings Institution also highlighted the need to fund research and development in higher education institutions in order to maintain a competitive advantage and ensure continued innovations for the public good. By reducing federal research funding, researchers will increasingly have to turn to industry grants for funding, which could potentially encourage commercial rather than public interests.

If AI in healthcare and mental health is to live up to its potential, we must reject token engagement and embrace co-design and participatory research at every stage of development. From data collection and algorithm training to deployment and evaluation, lived experience is not a nice-to-have but essential to building equitable, efficient, and reliable systems.

About Nabila El-Bassel

Dr. Nabila El-Bassel is a university professor and the Willma and Albert Musher Professor of Social Work at Columbia University. She is an internationally recognized intervention scientist whose work spans HIV/AIDS prevention, substance use and dependence, gender-based violence, and health inequities affecting marginalized communities. She is the founding director of the Social Intervention Group, a leading interdisciplinary research center established in 1990 that develops and tests evidence-based interventions for HIV, substance use, and violence. More recently, she launched the Artificial Intelligence for Social Good and Society Initiative and developed and published a model for providers and researchers advocating ethical community engagement at every phase of AI design.
