Patients may not take advice from AI doctors who know their names

As the use of artificial intelligence (AI) in health applications grows, health providers are looking for ways to improve patients' experience with their machine doctors.

Researchers from Penn State and the University of California, Santa Barbara (UCSB) found that people may be less likely to take health advice from an AI doctor when the machine knows their name and medical history. On the other hand, patients want to be on a first-name basis with their human doctors.

When the AI doctor used a patient's first name and referred to their medical history in the conversation, study participants were more likely to consider the AI health chatbot intrusive and less likely to heed its medical advice, the researchers added. Conversely, participants expected human doctors to differentiate them from other patients and were less likely to comply when a human doctor failed to remember their information.

The findings offer further evidence that machines walk a fine line in serving as doctors.

Machines do have advantages as medical providers, said Joseph B. Walther, distinguished professor in communication and the Mark and Susan Bertelsen Presidential Chair in Technology and Society at UCSB. He said that, like a family doctor who has treated a patient for a long time, computer systems could — hypothetically — know a patient’s complete medical history. In comparison, seeing a new doctor or a specialist who knows only your latest lab tests might be a more common experience, said Walther, who is also director of the Center for Information Technology and Society at UCSB.

“This struck us with the question: ‘Who really knows us better: a machine that can store all this information, or a human who has never met us before or hasn’t developed a relationship with us, and what do we value in a relationship with a medical expert?’” said Walther. “So this research asks, who knows us better — and who do we like more?”

Accepting AI doctors

As medical providers look for cost-effective ways to deliver better care, AI medical services may offer one alternative. However, AI doctors must provide care and advice that patients are willing to accept, according to Cheng Chen, a doctoral student in mass communications at Penn State.

“One of the reasons we conducted this study was that we read in the literature a lot of accounts of how people are reluctant to accept AI as a doctor,” said Chen. “They just don’t feel comfortable with the technology and they don’t feel that the AI recognizes their uniqueness as a patient. So, we thought that because machines can retain so much information about a person, they can provide individuation, and solve this uniqueness problem.”

The findings suggest that this strategy can backfire. “When an AI system recognizes a person’s uniqueness, it comes across as intrusive, echoing larger concerns with AI in society,” said S. Shyam Sundar, James P. Jimirro Professor of Media Effects at Penn State.

In the future, the researchers expect more investigations into the roles that authenticity and the ability of machines to engage in back-and-forth exchanges may play in developing better rapport with patients.

Read more at https://news.psu.edu/story/657391/2021/05/10/research/patients-may-not-take-advice-ai-doctors-who-know-their-names