
When most of us walk into a doctor’s office, we are prepared to ask questions. What does this symptom mean? Do I really need this test? What are my treatment options? What we may not know is that there is often a silent, invisible guest answering these questions alongside our clinicians: artificial intelligence.
In just the past few months, the FDA has authorized AI tools, including Clairity Breast and Tyto Insights, that can predict five-year breast cancer risk from routine mammograms, analyze lung sounds during virtual visits, and map the contours of organs on MRI scans. These, and many others, may already shape parts of our care without our ever being informed of their presence. Increasingly, algorithms provide risk scores, recommend treatment plans, and flag which patients need priority attention.
But patients often have no idea when this technology is being used on them, and there is no clear way to opt out. AI in the health field brings real potential, but also real risks, especially for Black communities. What does informed consent look like in an age where algorithms shape our care?
“Patients don’t see a pop-up message that says, ‘Today’s care was brought to you by this algorithm,'” says Tiffani Bright, PhD, assistant professor at Cedars-Sinai and co-director of its Center for AI Research and Education. “Algorithmic influence is just there in the background.”
This background usage can shape more than we think. Algorithms help hospitals decide who needs urgent care, what treatments are recommended and even what appointment slot you get based on your symptoms. When technology is invisible, Bright explains, patients lose something essential: their freedom to act.
“If you don’t know it exists, you have no choice,” she says. “You can’t say, ‘I don’t know who created this tool. I don’t know what data they used. I don’t even know if it works for patients like me.'” Bright believes these tools should earn patients’ trust, just like doctors do, and that trust must be earned through transparency and fairness.
For Black patients already navigating a health system shaped by discrimination, undertreatment and misdiagnosis, a lack of transparency can be downright detrimental. AI systems learn patterns from existing data, including medical records, imaging and laboratory results. But historical data may already reflect decades of uneven care.
“We need to ask ourselves who is represented in the dataset and who is not,” says Bright. “Anything that uses historical data can amplify existing disparities.” She plans to apply an equity lens to her work at the Center for AI Research and Education. “We test AI for language, gender, insurance, etc. We want to make sure that patients and patient groups are not underrepresented in our records.”
Black women already face some of the most dangerous gaps in the U.S. health care system, from higher maternal mortality rates to undertreatment of pain to delayed cancer diagnoses. AI has the potential to help close these gaps, but only if equity is intentionally built into the technology and patients are informed about how it affects their care.
Antony Haynes, a privacy attorney and professor at Albany Law School, notes that AI tools detect patterns that reflect economic, racial and cultural differences, even when race is not explicitly included. For example, pulse oximeters (the small clips placed on a finger to measure blood oxygen) are less accurate on darker skin, and temperature scanners often used in clinics may underdetect fever in Black patients.
During COVID, an algorithm used by a major insurer prioritized healthier white patients over sicker Black patients, not because race was a factor, but because Black patients historically receive less care and therefore appear “low cost.” Another tool miscategorized asthma patients, who are disproportionately Black, as “low risk” based only on length of hospital stay. At the policy level, these disparities remain largely ignored. “Because Black patients are not the priority population for industry or regulators, these problems often go uncorrected,” says Haynes.
So, what are patients’ rights really? Legally, this space is murky, but Haynes details a few key points to note: HIPAA, the main federal health privacy law, does not require doctors to disclose when using AI. Patients generally do not have a federal right to opt out of AI-assisted care, and providers can often use “de-identified” patient data to train the AI.
However, some states, such as California, give residents the right to opt out of certain automated decisions. The caveat is that hospitals themselves are generally exempt.
“In informed consent law, consent is generally required when your data is used for research purposes,” says Haynes. “But for routine treatments, there is no federal requirement to inform you about the use of AI.” He believes this must change, and urgently.
“As a human being, you have the right to a human decision-maker,” he says. “You have the right to know if your doctor relies on software. You have the right to request a human exemption.”
But even without transparency laws, patients still have power. Haynes encourages asking your doctor questions like these:
- Are you using AI or software to help diagnose or treat me?
- How exactly is it used?
- Do you rely on it or is it just one tool among others?
- If I prefer a human-only decision, is that possible?
“I think you always have to ask,” he says. “Ultimately, you can seek a second opinion or choose another provider.”
Bright agrees, adding that Black patients need to feel empowered to question the tools just as they question the system. “Don’t be afraid of technology, but be informed. Ask questions. Use your voice. We want our tools to be used with our patients, not on them,” she says. “This is the difference between ethical AI and everything else. You have the right to understand and the right to say no.”
People can also ask lawmakers to draft federal and state legislation requiring doctors to proactively disclose when AI is used, providers to disclose exactly how their algorithms were trained, and patients to have access to information explaining what a tool does and doesn’t do. “Patients shouldn’t have to guess,” Haynes says.
Ultimately, no matter how advanced technology is, trust in healthcare always begins with a simple principle: nothing about us, without us.
