AI in Healthcare

Deciding how AI should be used in mental health care

January 22, 2026
6 Mins Read

As the use of artificial intelligence tools in healthcare increases at a dramatic rate, the debate over their use has focused on efficiency and innovation. But in a new article published in Nature Mental Health, Stony Brook University's Briana Last argues that a more pressing question has been largely overlooked: who decides how AI is used in mental health care, and in whose interests are these decisions made?

Last, an assistant professor in the Department of Psychology, co-authored the article, "Empowering service users, the public and providers to determine the future of artificial intelligence in behavioral healthcare," with Gabriela Kattan Khazanov, assistant professor at the Ferkauf Graduate School of Psychology at Yeshiva University. Citing research in psychology, public health, and technology ethics, Last and Khazanov argue that AI cannot solve the mental health crisis without addressing deeper structural inequalities, and that those most impacted by the system must have a seat at the table.

Briana Last. Photo by Luis Pedro Castillo Images for PRIDEnet.

“Much of my research examines how the U.S. mental health system currently fails to meet the needs of most of the people who depend on it,” Last said. “The people who need care most are often the least likely to receive it, and clinicians who provide care to underserved communities are often underpaid and overworked.”

This imbalance shaped her interest in patient and clinician perspectives. “In my research and that of others, the message is pretty consistent,” Last said. “People want care to be more accessible and more affordable, and to preserve their agency. They want more autonomy and decision-making power over how care is distributed and delivered, and most want more human connection, not less.”

Her concerns grew as generative AI tools, particularly mental health chatbots, began to attract public attention. “Tech CEOs and even some researchers have started telling the public that these chatbots will help solve the mental health crisis,” she said. “While this may be an effective selling point for investors, it is both unlikely and inconsistent with what most people who seek and provide treatment want or need.”

In the article, Last and Khazanov point out that the development of AI reflects human choices and power dynamics, particularly the influence of private companies.

“There is a tendency to think that the proliferation of AI in mental health care is inevitable, as if current AI use cases are just the natural course of history,” Last said. “This type of technological determinism fails to recognize that behind these decisions are human beings and powerful private interests.”

While the private sector is driving much of the recent innovation, Last says the public has already invested heavily in AI research and IT infrastructure. “The public has spent decades funding the development of AI, and is currently paying much of its costs,” she said. “They should have a say in how these technologies are used.”

The article warns that when AI tools are designed primarily to reduce costs or increase profits, they risk worsening inequities in care. “I worry that tech companies, employers and insurers are using these technologies to cut back on human-delivered mental health care and clinical training in the name of cutting costs,” Last said. “We’re already seeing this happening.”

In this scenario, she added, access to human clinicians could become increasingly stratified. “Mental health care delivered by humans could become a luxury good,” she said.

Rather than rejecting AI outright, the authors argue for a different model of development and governance. One of their key recommendations is for increased public investment and public ownership of AI technologies used in mental health care.

Last expressed skepticism that regulation alone could keep pace with rapid technological change. “While regulation is necessary, I don’t think it’s sufficient,” she said. “Public investment and ownership can redirect technology investments to prioritize care for those most in need – care that may not always be cost-effective, but will always be essential.”

The article also calls for participatory research methods that actively involve service users, clinicians and communities throughout the AI development process. This involvement, Last said, must go beyond superficial consultation.

“People who will be using these technologies regularly should have a place at every stage of the research process, from idea generation to implementation,” she said. “Right now, there is a huge disconnect between what most people think and feel about AI and how AI is actually deployed in mental health care.”

Providers have expressed concerns about how AI could affect training, supervision and patient relationships. “People are very concerned about how AI is and will be used in mental health care,” Last said. “When it comes to chatbots, there are real questions about their safety and effectiveness, particularly for vulnerable people.”

Last believes that public universities like Stony Brook have a critical role to play in determining a more ethical path forward and ensuring that technologies serve the public’s interests. “They exist to produce knowledge that benefits society, not just private investors.”

She said that mission is already evident at Stony Brook. “Every time I open the SBU newsletter, I am amazed by the innovative work happening here,” Last said. “This is a testament to what public research can achieve.”

Although the article presents policy recommendations, Last cautions against looking for quick fixes. “Our mental health system is rife with many problems, and I don’t think they can be solved by a few top-down policies or technological innovations,” she said. Instead, she advocates for what she calls a more democratic approach to technology.

“The public already pays a lot of the costs of AI technologies,” she said. “We need to start feeling empowered to have a say in how AI is developed and deployed.”

“It’s easy to think that the current ways AI is used in mental health care are unavoidable,” Last said. “But if we remember that it is humans, not the technologies themselves, who make the decisions about how AI is designed and deployed, we can begin to reimagine how AI could actually promote the public’s mental health and wellbeing. Service users, the public and providers deserve a real voice. The future of mental health care should be shaped by the people it is intended to serve.”

-Beth Squire
