In Southern California, where homelessness rates are among the highest in the country, a private company, Akido Labs, operates clinics for unhoused patients and other low-income individuals. The catch? Patients are seen by medical assistants who use artificial intelligence (AI) to listen to conversations and then issue potential diagnoses and treatment plans, which are then reviewed by a doctor. The company’s goal, its chief technology officer told the MIT Technology Review, is to “remove the doctor from the visit.”
This is dangerous. Yet it is part of a larger trend in which generative AI is being integrated into healthcare for medical professionals. In 2025, a survey by the American Medical Association reported that two out of three doctors used AI to help them in their daily work, including diagnosing patients. One AI startup raised $200 million to provide healthcare professionals with an application billed as a “ChatGPT for doctors.” US lawmakers are considering a bill that would recognize AI as capable of prescribing medication. While this AI trend in healthcare affects almost all patients, it has a more profound impact on low-income people, who already face significant barriers to care and higher rates of mistreatment in healthcare settings. Unhoused and low-income people should not serve as a testing ground for AI in healthcare. Instead, their voices and priorities should determine if, how, and when AI is implemented in their care.
The rise of AI in healthcare has not happened in a vacuum. Overcrowded hospitals, overworked clinicians, and relentless pressure for medical practices to operate seamlessly, moving patients in and out of a vast for-profit health system, have set the conditions. The demands placed on healthcare workers are compounded in economically disadvantaged communities, where healthcare facilities are often under-resourced and patients are uninsured and carry a greater burden of chronic illness due to racism and poverty.
This is where someone might ask, “Isn’t something better than nothing?” Well, actually, no. Studies show that AI-based tools generate inaccurate diagnoses. A 2021 study in Nature Medicine examined AI algorithms trained on large chest X-ray datasets used for medical imaging research and found that these algorithms systematically underdiagnosed Black and Latinx patients, patients registered as female, and patients with Medicaid insurance. This systematic bias risks worsening health inequities for patients already facing barriers to care. Another study, published in 2024, found that AI misread breast cancer screenings for Black patients: the risk of false positives was greater for Black patients screened for breast cancer than for their white counterparts. Due to algorithmic bias, some AI clinical tools have notoriously performed worse for Black patients and other people of color. This is because AI does not “think” independently; it relies on probabilities and pattern recognition, which can reinforce bias against already marginalized patients.
Some patients are not even aware that their healthcare provider or healthcare system uses AI. A medical assistant told the MIT Technology Review that patients are informed that an AI system is listening to them, but not that it is making diagnostic recommendations. This echoes an era of exploitative medical racism in which Black people were experimented on without informed consent and often against their will. Can AI help healthcare providers by quickly supplying information that lets them move on to the next patient? Maybe. But the problem is that this could come at the expense of diagnostic accuracy and worsen health inequities.
And the potential impact goes beyond diagnostic accuracy. TechTonic Justice, an advocacy group working to protect economically marginalized communities from the harms of AI, released a groundbreaking report finding that 92 million low-income Americans “have some basic aspect of their lives decided by AI.” These decisions range from how much they receive from Medicaid to whether they are eligible for Social Security Administration disability benefits.
A real-life example of this is currently playing out in federal court. In 2023, a group of Medicare Advantage customers sued UnitedHealthcare in Minnesota, alleging they were denied coverage because the company’s AI system, nH Predict, mistakenly deemed them ineligible. Some of the plaintiffs are the estates of Medicare Advantage customers; these patients allegedly died as a result of being denied medically necessary care. UnitedHealth sought to dismiss the case, but in 2025 a judge ruled that the plaintiffs could move forward with some of the claims. A similar case was filed in federal court in Kentucky against Humana. There, Medicare Advantage customers alleged that Humana’s use of nH Predict “spits out generic recommendations based on incomplete and inadequate medical records.” That case is also ongoing, with a judge ruling that the plaintiffs’ legal arguments were sufficient to survive the insurance company’s motion to dismiss. While the final decisions in these two cases remain up in the air, they point to a growing trend of using AI to decide health coverage for low-income people, and to its pitfalls. If you have financial resources, you can access quality health care. But if you are unhoused or have a low income, AI can prevent you from fully accessing health care at all. This is medical classism.
We should not experiment with deploying AI on patients who are unhoused or have low incomes. The documented harms outweigh the unproven benefits promised by startups and other technology companies. Given the barriers unhoused and low-income people face, it is essential that they receive patient-centered care from a human healthcare provider who is attentive to their health needs and priorities. We cannot normalize a healthcare system in which practitioners take a back seat while AI, run by private companies, takes the lead. An AI system that “listens,” developed without rigorous evaluation by the affected communities themselves, disempowers patients by stripping them of the power to decide which technologies, including AI, are used in their care.
-
Leah Goodridge is an attorney who has worked in the field of homelessness prevention for 12 years.
-
Oni Blackstock, MD, MHS, is a physician, founder and executive director of Health Justice, and Public Voices Fellow on technology in the public interest with The OpEd Project.
