AI improves reader performance for ABUS studies


An artificial intelligence (AI) algorithm could help radiologists and other physicians better classify lesions on automated breast ultrasound (ABUS) exams, according to research published online February 12 in Ultrasound in Medicine and Biology.

After training a convolutional neural network (CNN) to differentiate between benign and malignant lesions on ABUS images, a multinational team of researchers found that their algorithm enhanced diagnostic performance for four out of five readers -- including three radiology residents.

"Our results suggest that the proposed CNN has advantages as a second reviewer to support diagnostic decisions of human reviewers and provide opportunities to compare decisions," wrote the authors, led by Yi Wang of the University of Saskatchewan in Canada and Dr. Eun Jung Choi, PhD, of Jeonbuk National University Medical School in Jeonju, South Korea.

To help radiologists better classify breast lesions on ABUS, the researchers used a "multiview" CNN, which extracts and analyzes features from multiple views of a lesion image patch.

"Because the ABUS images can be visualized in transverse and coronal views, the proposed CNN provides an efficient way to extract multiview features from both views," the authors wrote.

The researchers trained and validated their modified Inception-v3 CNN using a dataset of 316 breast lesions from 263 patients. Of the lesions, 135 were malignant and 181 were benign. The mean lesion size was 13.23 mm.
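For a sense of what a "modified Inception-v3" setup can look like in practice, the sketch below adapts torchvision's stock Inception-v3 for two-class output. The authors' actual modifications are not detailed in this article, so the replaced layers here should be read as assumptions, not the published method.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical adaptation: torchvision's Inception-v3 with its final layers
# replaced for two-class (benign vs. malignant) output.
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 2)

model.eval()
with torch.no_grad():
    # Inception-v3 expects 3-channel 299 x 299 inputs; grayscale ABUS patches
    # would need to be resized and channel-replicated first.
    logits = model(torch.randn(1, 3, 299, 299))
print(logits.shape)  # torch.Size([1, 2])
```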

On its own and without any manual image preprocessing, the algorithm yielded 88.6% sensitivity, 87.6% specificity, and an area under the curve (AUC) of 0.947 for classifying breast lesions.
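For readers reproducing this kind of standalone evaluation, sensitivity, specificity, and AUC can be computed from per-lesion labels and model scores along the following lines. The labels and scores below are toy values, not the study's data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Toy per-lesion ground truth (1 = malignant) and model probabilities.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.92, 0.30, 0.81, 0.45, 0.12, 0.55, 0.77, 0.20])
y_pred = (y_score >= 0.5).astype(int)  # a single operating threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate on malignant lesions
specificity = tn / (tn + fp)   # true-negative rate on benign lesions
auc = roc_auc_score(y_true, y_score)  # threshold-free ranking performance

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} AUC={auc:.3f}")
```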

The researchers then conducted an observer performance study in which five reviewers -- a specialized breast radiologist, two senior radiology residents, one first-year radiology resident, and one other physician -- interpreted the ABUS scans before and after they viewed the AI findings.

Impact of AI on classification of breast lesions on ABUS (all values shown as before/after access to AI results)

Reader                           Accuracy       Sensitivity    Specificity    AUC
Specialized breast radiologist   86.4%/88.3%    88.1%/93.3%    85.1%/84.5%    0.942/0.918
Senior radiology resident 1      79.7%/82.3%    86.7%/92.6%    74.6%/74.6%    0.872/0.903
Senior radiology resident 2      84.2%/90.0%    87.4%/92.6%    81.8%/87.3%    0.907/0.938
First-year radiology resident    81.6%/84.8%    88.1%/94.1%    76.8%/78.0%    0.901/0.937
Other physician                  60.4%/87.6%    87.4%/91.1%    40.3%/85.1%    0.639/0.884

The improvement in AUC was statistically significant for all readers except the specialized breast radiologist, whose AUC actually declined slightly with access to the AI results.
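This article does not name the statistical test the authors used to compare AUCs. A paired bootstrap over cases is one common, assumption-light way to gauge whether a reader's AUC change is significant; the sketch below illustrates the approach with toy data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def paired_bootstrap_auc_diff(y_true, scores_before, scores_after, n_boot=2000):
    """Percentile-bootstrap 95% CI for the change in a reader's AUC after AI
    access. Cases are resampled in pairs so before/after scores stay aligned."""
    y_true = np.asarray(y_true)
    before = np.asarray(scores_before)
    after = np.asarray(scores_after)
    n = len(y_true)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        if len(np.unique(y_true[idx])) < 2:  # AUC needs both classes present
            diffs[b] = np.nan
            continue
        diffs[b] = (roc_auc_score(y_true[idx], after[idx])
                    - roc_auc_score(y_true[idx], before[idx]))
    diffs = diffs[~np.isnan(diffs)]
    return np.percentile(diffs, [2.5, 97.5])

# Example with toy reader scores (not the study's actual data):
y = [1, 0, 1, 1, 0, 0, 1, 0]
before = [0.7, 0.4, 0.6, 0.3, 0.2, 0.5, 0.8, 0.3]
after = [0.9, 0.3, 0.8, 0.6, 0.1, 0.4, 0.9, 0.2]
print(paired_bootstrap_auc_diff(y, before, after))  # CI excluding 0 suggests a real change
```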

"The proposed CNN can be integrated into existing [computer-aided diagnosis] systems to provide effective lesion feature extraction and robust breast cancer classification," the authors wrote. "In future work, it would be worthwhile to evaluate our present work with dedicated detection algorithms for breast lesion detection and classification as an end-to-end [computer-aided detection] solution."
