In recent years, deep learning and convolutional neural networks (CNNs) have performed extremely well in general computer vision tasks, leading to expectations of promising performance for medical interpretation tasks such as computer-aided diagnosis (CADx) of breast tumors, according to lead author Benjamin Huynh of the University of Chicago.
"Indeed, in previous research, we found that our conventional CADx methods and our deep-learning methods performed well using mammographic and ultrasound data in distinguishing between cancer and noncancer," he said. "Most exciting, though, was that when we combined the two approaches, we achieved a statistically significant improvement in diagnostic performance."
CNNs differ from conventional computer-assisted diagnosis methods in that they "learn" unintuitive features directly from medical images, instead of explicitly measuring lesion properties such as size, shape, or morphology, Huynh said. Because the two approaches seem so different from each other, the researchers sought to investigate how CNNs distinguish between cancer and noncancer, as well as to develop methods for further integrating CNNs with conventional CADx methods, he said.
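For readers who want a concrete picture of that distinction, the sketch below contrasts a few explicit lesion measurements with features pooled from a pretrained CNN. The ResNet-18 backbone and the particular shape measures are illustrative assumptions, not the features or pipeline reported by the Chicago group.

```python
# A minimal sketch of the contrast described above: explicit lesion measurements
# versus features "learned" by a CNN. The ResNet-18 backbone and the particular
# shape measures are illustrative assumptions, not the study's actual pipeline.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T


def handcrafted_features(lesion_mask: np.ndarray) -> np.ndarray:
    """Conventional CADx-style measurements of the segmented lesion."""
    area = int(lesion_mask.sum())
    ys, xs = np.nonzero(lesion_mask)
    height = int(ys.max() - ys.min() + 1) if area else 0
    width = int(xs.max() - xs.min() + 1) if area else 0
    extent = area / (height * width) if height and width else 0.0  # bounding-box fill
    return np.array([area, height, width, extent], dtype=np.float32)


def cnn_features(roi_gray: np.ndarray, extractor: torch.nn.Module) -> np.ndarray:
    """Representation learned directly from pixels, pooled from the last conv stage."""
    preprocess = T.Compose([
        T.ToTensor(),                            # HxW float image -> 1xHxW tensor
        T.Lambda(lambda x: x.repeat(3, 1, 1)),   # grayscale -> 3 channels
        T.Resize((224, 224), antialias=True),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    with torch.no_grad():
        feats = extractor(preprocess(roi_gray).unsqueeze(0))
    return feats.flatten().numpy()


if __name__ == "__main__":
    roi = np.random.rand(128, 128).astype(np.float32)   # stand-in lesion ROI
    mask = roi > 0.5                                     # stand-in lesion mask
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()                    # drop the classifier head
    backbone.eval()
    print(handcrafted_features(mask))          # 4 explicit measurements
    print(cnn_features(roi, backbone).shape)   # (512,) learned features
```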
"In general, our results suggest that there are unintuitive diagnostic properties in the surrounding texture of breast images that CNNs capture, in comparison to conventional CADx methods, which primarily use information from the lesion alone," Huynh said. "This knowledge led us to preprocess our medical images to artificially include more of the surrounding texture, resulting in significant gains in diagnostic performance. By exploiting the complementary nature of CNN-based methods and conventional CADx methods, we were also able to further improve an integrated diagnostic system incorporating both methods."
Check out this Monday morning session for all the details.