Inconsistencies between marketing and regulatory clearance exist in over one in six radiology machine-learning and AI technologies, suggest findings published July 5 in JAMA Network Open.
Researchers led by Phoebe Clark from NYU Langone Health in New York found that most radiology AI devices adhered to their U.S. Food and Drug Administration (FDA) clearance summaries. However, some marketing materials did not, potentially misleading consumers.
"In a sense, some AI companies exaggerate what is approved," co-author Yindalon Aphinyanaphongs, MD, PhD from New York University told AuntMinnie.com.
Medical devices must receive FDA clearance before they can be sold commercially in the U.S., and AI-powered medical technology is no exception. Once cleared, devices should be marketed accurately so that consumers know their algorithms are safe and effective for public use.
To help address issues with emerging technologies such as AI, the FDA uses several committees. One of them, the Medical Devices Advisory Committee, consists of 18 specialized panels, each of which advises the commissioner on issues within its specialty.
The FDA's Center for Devices and Radiological Health (CDRH) has also established advisory committees to provide independent, professional expertise and technical assistance on medical devices and electronic products that produce radiation. The researchers noted that since most FDA-cleared AI or machine-learning devices fall under the jurisdiction of the radiology and cardiovascular committees, "these committees are likely to be much more familiar with possible frameworks of devices enabled with AI or [machine-learning] capabilities ..."
Clark and colleagues wanted to determine whether medical devices marketed as AI- or machine learning-enabled were appropriately cleared for those capabilities in their FDA 510(k) clearances.
The team collected data from public application summaries and corresponding marketing materials for 119 devices with AI or machine-learning software components, sorting each into one of three categories: adherent, contentious, or discrepant. They found that 80.6% of the devices (n = 96) were adherent, meaning the marketing and FDA clearance summaries were consistent, while 12.6% (n = 15) were discrepant and 6.7% (n = 8) were contentious.
Additionally, the team reported that 75 of the devices (63%) came before the radiological device approval committee. Of these, 62 (82.7%) were deemed adherent, while 10 (13.3%) were discrepant and three (4%) were contentious.
Another 23 devices were cleared through the cardiovascular device approval committee; of these, the researchers found that 19 (82.6%) were adherent.
The researchers reported that the difference in category distribution between cardiovascular and radiological devices was statistically significant (p < 0.001).
While the study authors did not speculate on how these trends came about, Aphinyanaphongs said it "could be a lot of things."
"Maybe there is a disconnect between who writes the marketing and the technical people that do the FDA claims," he told AuntMinnie.com. "Maybe it is tactical from the AI company to secure funding in hopes that the funder won’t know the details. It could be any number of things."
More research in this area, including into certification methods, could further peel back the curtain. The authors also noted, however, that "any level" of discrepancy matters for consumer safety.
"The aim of this study was not to suggest developers were creating and marketing unsafe or untrustworthy devices, but to show the need for study on the topic and more uniform guidelines around marketing of software-heavy devices," the authors wrote.