Wednesday, November 29 | 10:20 a.m. - 10:30 a.m. | W3-SSIN06-6 | S401
Citing vulnerabilities in deep learning-based mammographic breast cancer diagnosis models, a University of Pittsburgh research team has developed and evaluated a novel technical framework that could defend against adversarial attacks on AI software.
The starting dataset for the framework consisted of 4,346 mammograms from a cohort of 1,284 women who underwent full-field digital mammography for breast cancer screening. First, the team built a diagnosis model using a VGG-16 network to classify mammograms as breast cancer (366 biopsy-proven malignancies) or normal (918 negative cases).
Degan Hao, MS, a PhD student in the University of Pittsburgh’s Intelligent Systems Program, will share the results of a study that tested the group's adversarial training strategy. The framework pairs a “regularization algorithm,” developed to help the model learn “adversarially robust features” for classification, with a label-independent data augmentation method that resolves the data leakage commonly introduced by black-box data synthesis. Five-fold cross-validation was used to compare AUC values for the proposed adversarial training against regular training.
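In broad strokes, adversarial training of the kind evaluated here replaces each clean training input with a worst-case perturbed copy before taking a gradient step. The NumPy sketch below shows that generic recipe (single-step, FGSM-style) on a toy logistic-regression model; the team's regularization algorithm and augmentation method are not reproduced, and every variable here is illustrative.

```python
# Generic adversarial training on a toy logistic-regression model.
# This illustrates only the broad defense family the study builds on,
# not the Pittsburgh team's specific regularized method.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)  # toy binary labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

w, lr, eps = np.zeros(d), 0.1, 0.25
for epoch in range(40):
    for i in range(n):
        # single-step worst-case perturbation of the input (FGSM)
        grad_x = (sigmoid(X[i] @ w) - y[i]) * w
        x_adv = X[i] + eps * np.sign(grad_x)
        # gradient step on the adversarial example, not the clean one
        w -= lr * (sigmoid(x_adv @ w) - y[i]) * x_adv

clean_acc = ((X @ w > 0) == (y == 1)).mean()
print(f"clean accuracy after adversarial training: {clean_acc:.2f}")
```

Training on perturbed inputs is what lets the defended model keep its accuracy when attacked, at some cost in training time.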
Researchers evaluated two types of adversarial attack: white-box attacks (the attacker knows the AI model's parameters), in which adversarial data were generated with the projected gradient descent method to insert adversarial noise into mammogram images; and black-box attacks (the attacker has no access to the model's parameters), in which adversarial data were generated by intentionally inserting or removing tumorous tissue in mammograms.
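Projected gradient descent, the white-box attack named above, repeatedly nudges an input along the sign of the loss gradient and projects it back into a small epsilon-ball so the change stays imperceptible. A self-contained NumPy sketch on a toy linear model (all values illustrative, not from the study):

```python
# Toy white-box PGD attack: perturb inputs within an L-infinity
# eps-ball to flip a logistic-regression classifier whose weights
# the attacker knows. A sketch of the attack family, not the study's code.
import numpy as np

rng = np.random.default_rng(1)
n, d = 100, 8
w = rng.normal(size=d)         # model weights, known to the attacker
X = rng.normal(size=(n, d))
y = (X @ w > 0).astype(float)  # model is perfectly accurate on clean X

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd(x, yi, eps=0.5, alpha=0.2, steps=10):
    """Ascend the loss gradient w.r.t. the input, then project
    back onto the eps-ball around the original input."""
    x_adv = x.copy()
    for _ in range(steps):
        grad = (sigmoid(x_adv @ w) - yi) * w  # d(loss)/dx, logistic loss
        x_adv = np.clip(x_adv + alpha * np.sign(grad), x - eps, x + eps)
    return x_adv

X_adv = np.array([pgd(X[i], y[i]) for i in range(n)])
clean_acc = ((X @ w > 0) == (y == 1)).mean()
adv_acc = ((X_adv @ w > 0) == (y == 1)).mean()
print(f"clean accuracy: {clean_acc:.2f}, under PGD attack: {adv_acc:.2f}")
```

The same bounded-perturbation idea scales to images, where the noise added to each pixel is small enough to be invisible to a radiologist yet still flips the model's prediction.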
The breast cancer diagnosis model achieved an AUC of 0.668 with regular training. Under white-box attack, the model's performance degraded to an AUC of 0.415; with the proposed framework, the AUC recovered to 0.673. Likewise, the black-box attack degraded the model to an AUC of 0.461, while the defense framework restored the AUC to 0.637.
While deploying medical AI into clinical informatics workflows carries risks, such as the white-box and black-box adversarial attacks described in this session, an AI defense strategy could make diagnosis models more resilient and improve patient safety, according to Hao. Drop in to learn more.