Is ChatGPT more empathetic toward patient questions than physicians?


ChatGPT outperforms physicians in providing high-quality, empathetic answers to patient questions, according to findings published April 28 in JAMA Internal Medicine.

A team led by Dr. John Ayers from the Qualcomm Institute at the University of California, San Diego, found that healthcare professionals preferred ChatGPT's responses nearly 80% of the time and rated the chatbot's answers as higher in quality and more empathetic.

An increase in the volume of communication between clinicians and patients in recent years has meant bigger workloads and, in turn, higher rates of physician burnout, Ayers' group noted. Some patient messages seek medical advice, and answering these takes more time and skill than questions that can be handled with generic responses.

ChatGPT's use in medicine has been explored in recent months, with debate persisting over its potential clinical and research applications, including in radiology. While recent research on ChatGPT in radiology has highlighted the large language model's potential as well as its current shortcomings, the Ayers team noted that its ability to help answer patient questions had not yet been explored.

To address this knowledge gap, the investigators tested the model's ability to respond to patients' healthcare questions with high-quality, empathetic answers and compared its output with physicians' responses. They used a nonidentifiable database of questions from a public social media forum (Reddit's r/AskDocs), randomly drawing 195 exchanges from 2022 in which a verified physician had responded to a public question. Chatbot responses were generated by entering each original question into a fresh ChatGPT session, with no prior context in the conversation.
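For readers curious how that response-generation step might look in practice, here is a minimal sketch. Note the assumptions: the study itself pasted questions into the ChatGPT web interface, so the OpenAI Python client, the model name, and the sample question below are illustrative stand-ins, not the authors' actual procedure.

```python
# Minimal sketch of the study's "fresh session" protocol, under the
# assumption that one uses the OpenAI Python client (openai>=1.0) rather
# than the ChatGPT web interface the authors actually used.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_chatbot_response(question: str) -> str:
    """Send one patient question in a brand-new, context-free conversation,
    mirroring how each question was entered into a fresh ChatGPT session."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; the study used ChatGPT circa late 2022
        messages=[{"role": "user", "content": question}],  # no prior turns
    )
    return response.choices[0].message.content


# Hypothetical example of one forum-style question:
question = "I hit my head on a cabinet door. Should I worry about a concussion?"
print(generate_chatbot_response(question))
```

Starting a new conversation per question matters: it prevents earlier exchanges from influencing later answers, so each response reflects only the question being tested.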

The team found that a group of licensed healthcare professional evaluators preferred the chatbot's responses to the physicians' in 78.6% of the 585 assessments (each of the 195 exchanges was judged by three evaluators). On a one-to-five scale measuring response quality, the evaluators rated 78.5% of ChatGPT's responses as "good" or "very good" in quality, compared with 22.1% of physician responses. They also rated 45.1% of ChatGPT's responses as "empathetic" or "very empathetic," compared with 4.6% of physician responses.

Despite the study's limitations and the "frequent overhyping of new technologies," evaluating how AI assistants such as ChatGPT can help with patient interactions shows promise for improving outcomes for both clinicians and patients, though more research is in order, according to Ayers and colleagues.

"While this cross-sectional study has demonstrated promising results in the use of artificial intelligence assistants for patient questions ... further research is necessary before any definitive conclusions can be made regarding their potential effect in clinical settings," they wrote.
