AI on par with rads in evaluating DBT exams

Amerigo Allegretto

Thursday, December 4 | 8:20 a.m.-8:30 a.m. | R1-SSBR10-3 | Room S406A

Attendees in this session will learn how a commercial AI model can analyze digital breast tomosynthesis (DBT) images as a standalone reader.

In his talk, Yan Chen, PhD, from the University of Nottingham in England, will present results from a study he and colleagues led, showing that the model performed similarly to breast imaging readers.

The study included 75 combined DBT and synthetic 2D mammography screening cases. The researchers placed these cases into a Personal Performance in Mammographic Screening (PERFORMS) external quality assurance test set. They had 108 readers, all of whom use DBT, from seven U.K. National Health Service (NHS) hospitals and one U.S.-based institution interpret the cases. The readers had a median of 12 years of experience in breast imaging and six years of experience with DBT. The AI model, meanwhile, analyzed the same test set.

The AI model achieved an area under the receiver operating characteristic curve (AUC) of 0.935, 97.4% sensitivity, and 71.4% specificity. The readers achieved an AUC of 0.933, 92.1% median sensitivity, and 88.4% specificity.

When the U.S.-based readers analyzed images without assistance, they achieved a mean AUC of 0.947, a mean sensitivity of 96.1%, and a specificity of 75.9%. With AI assistance, these figures improved to an AUC of 0.974, 98.7% sensitivity, and 79.9% specificity. Finally, the team reported an 18% reduction in average per-case read time with AI assistance.

Attend this session to learn more about what this could mean for DBT reading.