Tuesday, November 28 | 9:30 a.m.-9:40 a.m. | T3-SSBR05-1 | Room S406B
In this talk, researchers will reveal data showing that multimodal AI can yield better results than single-modality AI software in women screened with both mammography and ultrasound.
Laura Heacock, MD, from New York University, will present her team's finding that an AI model trained on full-field digital mammography, digital breast tomosynthesis (DBT), and ultrasound demonstrated high performance in breast cancer screening.
AI continues to show its utility in clinical settings, and previous studies suggest that AI trained on multiple imaging modalities can further improve breast cancer screening, according to the researchers. The Heacock team wanted to find out whether adding ultrasound to mammography and DBT in a multimodal AI system could add further value for detecting breast cancer. The system was trained on full-field digital mammography, DBT, and breast ultrasound exams performed between 2010 and 2020, totaling 1,964,416 exams in 324,978 patients.
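The abstract does not say how the three modalities are combined, but one common design for this kind of system is late fusion, in which each modality gets its own encoder and the resulting features are concatenated before a shared classification head. The sketch below is a hypothetical PyTorch illustration of that pattern, not the NYU team's model; the toy encoders, input shapes, and feature dimension are all assumptions.

```python
import torch
import torch.nn as nn

class LateFusionScreeningModel(nn.Module):
    """Hypothetical late-fusion sketch: one encoder per modality,
    concatenated features feed a shared malignancy head."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        # Stand-in encoders; a real system would use CNN/transformer
        # backbones suited to each modality (2D views, 3D DBT volumes,
        # ultrasound frames or cine loops).
        self.ffdm_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.dbt_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.us_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.head = nn.Linear(3 * feat_dim, 1)  # single malignancy logit per exam

    def forward(self, ffdm, dbt, us):
        fused = torch.cat([self.ffdm_encoder(ffdm),
                           self.dbt_encoder(dbt),
                           self.us_encoder(us)], dim=1)
        return self.head(fused)

# Toy forward pass on random tensors standing in for images.
model = LateFusionScreeningModel()
logit = model(torch.randn(2, 1, 64, 64),   # FFDM view (assumed shape)
              torch.randn(2, 1, 64, 64),   # DBT slab, flattened here
              torch.randn(2, 1, 64, 64))   # ultrasound frame
print(torch.sigmoid(logit))  # per-exam malignancy probability
```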
The researchers found that on a test set of over 12,000 cases, the system achieved an area under the receiver operating characteristic curve (AUROC) of 0.907 for multimodal screening exams, detecting 74.2% of breast cancers.
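For readers less familiar with the metric, AUROC measures how well a model's scores rank cancer exams above normal ones across all decision thresholds, where 1.0 is perfect and 0.5 is chance. A quick illustration on synthetic data (not the study's data) using scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic stand-in: 1,000 exams at ~1% cancer prevalence, with
# model scores drawn higher on average for positive exams.
y_true = rng.binomial(1, 0.01, size=1000)
scores = rng.normal(loc=y_true * 1.5, scale=1.0)

print(f"AUROC = {roc_auc_score(y_true, scores):.3f}")

# Sensitivity (the fraction of cancers detected, like the 74.2%
# reported above) depends on picking one operating threshold:
threshold = 0.5
sensitivity = (scores[y_true == 1] >= threshold).mean()
print(f"Sensitivity at threshold {threshold}: {sensitivity:.1%}")
```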
They also found that the multimodal AI outperformed a model that used mammography or DBT alone (AUROC = 0.849) and a model that used ultrasound alone (AUROC = 0.751).
The team also reported that the multimodal model performed well in women with dense breasts, producing an AUROC of 0.888; for nondense breasts, the AUROC was 0.936.
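Subgroup results like these are typically obtained by stratifying the test set on a per-exam attribute and recomputing the metric within each stratum. A minimal sketch, again on synthetic data with a hypothetical density flag:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000
y_true = rng.binomial(1, 0.02, size=n)            # synthetic cancer labels
scores = rng.normal(loc=y_true * 1.2, scale=1.0)  # synthetic model scores
dense = rng.binomial(1, 0.45, size=n).astype(bool)  # hypothetical density flag

# Compute AUROC separately within each density stratum.
for name, mask in [("dense", dense), ("nondense", ~dense)]:
    print(f"{name}: AUROC = {roc_auc_score(y_true[mask], scores[mask]):.3f}")
```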
According to the researchers, the findings demonstrate that adding ultrasound to multimodal-trained AI increases breast cancer screening accuracy in both dense and nondense breasts. Attend this session to see what else the AI system can accomplish.