Trust is low among U.S. adults when it comes to healthcare systems using AI responsibly and protecting patients from AI-related harms, according to survey results published February 14 in JAMA Network Open.
In their research, Paige Nong, PhD, from the University of Minnesota in Minneapolis, and Jodyn Platt, PhD, from the University of Michigan in Ann Arbor, also found that low general trust in the healthcare system is tied to low trust in medical AI.
“Low trust in healthcare systems to use AI indicates a need for improved communication and investments in organizational trustworthiness,” Nong and Platt wrote.
Previous research has shown varying attitudes among patients toward the use of AI in medical clinics, including in radiology. A 2022 survey reported that nearly two out of every three U.S. health consumers either trust or are neutral on the use of AI for medical imaging applications. Another 2022 study found that most women who are knowledgeable about AI for breast cancer screening are optimistic about the technology’s use as an adjunct for radiologists.
Nong and Platt conducted a national survey of U.S. adults to understand whether they trust their health systems to use AI responsibly and protect them from AI harms. They also examined variables that may be tied to these attitudes.
The study included responses from 2,039 participants with an average age of 48 years, 51.2% of whom were female and 48.8% of whom were male. Participants represented the following demographics: white (n = 876), Black (n = 540), Hispanic (n = 519), Asian (n = 53), and multiracial or other (n = 51).
On a 0-to-12-point scale to measure trust (with 12 indicating highest trust), respondents’ trust in the healthcare system was middling, with an average score of 5.38.
The researchers also reported the following:
- 65.8% of respondents reported low trust in their healthcare system to use AI responsibly, while 57.7% indicated low trust that their healthcare system would make sure an AI tool would not harm them.
- In multivariable logistic regressions, respondents with higher trust were more likely to believe that their healthcare system would protect them from AI harm (odds ratio [OR], 3.97) and use AI responsibly (OR, 4.29).
- Women were less likely than men to trust their system to use AI responsibly, but there was no difference by sex in respondents’ trust that systems would protect them from AI-related harms.
The researchers also found that experiences of discrimination while seeking care were negatively associated with trust in systems to use AI responsibly (OR, 0.66) and to protect patients from harm (OR, 0.57). Finally, they observed no association between health literacy or AI knowledge and trust in healthcare systems' use of AI.
Nong and Platt called for future studies to explore trends in such trust and include additional validated measures of factors such as patient comfort, familiarity, and experience with AI that could be associated with outcomes.