Information From Patient-Facing AI Should Be Taken With ‘a Grain of Salt’


Although patient-facing AI could make care more convenient in some settings, researchers delved into its ethical challenges and threats to patient dignity.


Patient-facing artificial intelligence (AI) has become more widely used in recent years; however, it may pose ethical challenges and threats to patient dignity, according to a study.

Specifically, the study, published in JCO Oncology Practice, focused on technologies such as telehealth (virtual communication between a patient and a medical professional), remote monitoring (digital health tools, including wearable sensors, that track health data) and health coaching (virtual and AI-assisted support and guidance for both patients and caregivers).

The researchers determined that although the use of AI may eventually prove beneficial in a clinical setting, with potential gains such as enhanced patient engagement and improved adherence to treatment, it is not there yet; ethical challenges remain, most notably around delivering properly personalized care.

“In general, at baseline, there’s not a huge amount of personalization. These are models that are trained off of large amounts of data, (such) that even if they’re trained off of healthcare-related data, they are going to skew towards the needs of large populations,” Dr. Amar Kelkar, lead author of the study, said in an interview with CURE®.

Kelkar is also a stem cell transplantation physician at the Dana-Farber Cancer Institute and an instructor in medicine at Harvard Medical School in Boston.

“Even if you were to train them on empathetic speech, which is, I think, one way that might allow this to become more personalized, the concern is that the speech may either be generic or not directly interact with the words that the patients are saying, because that’s the way that, at least right now, large language models largely work.

“These are just models that go based on a kind of probabilistic word finding. And so, when they communicate with patients, yes, the speech might sound logical, it might sound empathetic to a generic reader, but to a patient, it may not directly feel that way if the words just sound like they’re going through the motions of what empathy might sound like,” Kelkar said.

For the study, Kelkar and his co-researchers examined which types of technology, and how much of their use, could be acceptable in a medical setting, with an emphasis on the ethical challenges involved.

“As we’ve just started to soften the edges of what is defined as high-quality patient care, the definition is brought in, but does that mean it’s a slippery slope to just saying that no human is needed and we can just have all our medical care done by very highly, highly trained software?” Kelkar said.

It all comes back to empathy, he noted: humans caring for patients with cancer show those patients that they are being treated with respect.

“Human dignity and feeling like you’re being cared for by another person feels like someone is empathizing with you, someone is treating you with respect, the way they would want themselves to be treated, or their family members to be treated,” Kelkar added. “That’s really why human-based care or human-supervised care, or some component of that, needs to be involved.”

Even the style of communication changes when AI takes over more of the interaction between patients and medical professionals than in-person contact does, Kelkar said.

“(For in-person care,) there’s a lot of interaction between patients and their physicians or the medical team that allows for body language and all those other components that are lost when you switch to a virtual medium,” he explained. “We saw this in the Zoom era. And certainly, with telehealth, that's something that sometimes can get lost. Even if you’re just speaking one-on-one, a huge part of the communication style changes.”

For patients who are interacting with AI models, Kelkar urged taking the information they provide “with a grain of salt.”

“Remember the fact that if there are no humans supervising that AI model, there’s a risk that the data could be just either false, hallucinated or imperfect in some way that they may not be able to recognize,” he said. “I might say that, as the medical community gets more familiar with these, and there are more and more models that are trained with medical data, and maybe trained on empathetic care, I might be able to point a patient to AI models that might be more potentially safe or more targeted to their needs.”

