Unlike many deep learning-based image analysis applications, which typically target a single finding, radiologists routinely observe multiple findings during image interpretation, often because the findings are correlated, according to presenter Xiaosong Wang, PhD, a former postdoctoral fellow at the U.S. National Institutes of Health (NIH). It remains a challenging task, though, to develop a computer-aided detection (CAD) framework capable of seamlessly detecting multiple disease types, he said.
"However, such a framework is a crucial part to build an automatic radiological diagnosis and reporting system," Wang said.
Toward this end, the researchers from the laboratory of Dr. Ronald Summers, PhD, at the NIH Clinical Center developed a new framework for classifying multiple thorax diseases on chest radiographs by using both image features and text extracted from associated reports. They then converted the framework into a chest x-ray reporting system.
"The system takes a single chest x-ray as input and simulates the real-world reporting process by outputting disease classification and generating a preliminary report spontaneously," he told AuntMinnie.com. "The text embeddings learned from the retrospective reports are integrated into the model as a priori knowledge and the joint learning framework boosts the performance in both tasks in comparison to the previous state of the art."
The work serves as a significant case study in using retrospective data -- i.e., radiology images and their associated reports -- to build a multidisease classification and reporting framework, Wang said.
"While significant improvements have been achieved in the multidisease classification problem, there is still much space to improve the quality of generated reports," he said. "We hope it can attract the attention of researchers in the community so as to march forward in this challenging task."
Take in this Sunday presentation to learn more.