Preparing healthcare staff to interact responsibly with AI tools without over-relying on automation or compromising human oversight will require awareness training akin to phishing exercises, said Skip Sorrels, field CTO and CISO at security company Claroty.
“There’s an essence of a tabletop exercise. I can imagine a phishing simulation, but in the form of an AI,” Sorrels said.
For example, “they go through a quiz situation, they use AI and it basically teaches them that this is not something that should go into AI, it’s an open platform, it’s not secure, it’s not secure within the walls of the university or the hospital,” he said.
This type of scenario-based training will be the most meaningful "because AI means a lot of different things to different people, and so I think it needs to be situation-based," he said.
Preparing and training healthcare staff to use AI responsibly and securely is just one of many important facets of AI governance in healthcare, he said.
In this audio interview with Information Security Media Group (see audio link below photo), Sorrels also discussed:
- Other critical facets of AI governance in healthcare;
- How AI governance in healthcare must keep pace with the increasingly sophisticated use of AI by threat actors;
- Public and private sector collaboration to create a robust framework for AI-enabled healthcare;
- AI developments in healthcare to watch out for in the coming year.
Sorrels has more than 25 years of experience as a cybersecurity leader building and scaling robust security programs, particularly in the healthcare industry. He began his career as a nurse in Texas before moving into technology, contributing to cybersecurity architecture and solutions for the U.S. Department of Defense at Dell. Prior to joining Claroty, Sorrels was director of cybersecurity at Ascension Healthcare, where he built and led a medical device security program that evolved into broader OT and XIoT cybersecurity initiatives.
