AI use cases in healthcare will continue to explode in 2026, including for back-office automation, ambient clinical documentation in exam rooms, claims processing, and clinical decision support. So will critical considerations of privacy, security, legal and other risks, said attorney Wendell Bartnick of the Reed Smith law firm.
“I really think it comes down to governance,” Bartnick said in an in-depth interview with Information Security Media Group about the growing opportunities of AI and associated risks in healthcare.
The types and degree of risks vary across different types of AI use cases in healthcare, but all entities considering deploying AI need to do their governance homework, he said.
“Until there is more regulation, I would look at the National Institute of Standards and Technology’s AI Risk Management Framework,” he said. “I think it’s a very good basis for just understanding these risks.”
In the interview (see audio link below photo), Bartnick also discussed:
- Key security, privacy, regulatory and legal risks and pitfalls involving different AI use case examples in healthcare;
- Concerns about the use of agentic AI, mental health chatbots and recordings of patient encounters;
- The dangers of patients using AI to self-diagnose and self-treat.
Bartnick, a partner at Reed Smith, draws on his IT background to advise clients in a variety of industries, including healthcare, life sciences and biotechnology. His practice focuses on data rights and privacy, cybersecurity, commercialization, technology licensing – including AI and machine learning – governance, partnership strategies and agreements, and other regulatory compliance matters, as well as investigations related to technology and data issues.
