While the company's deep-learning models can identify fully evident abnormalities on chest radiographs, the researchers wanted to see how the models would perform when an abnormality is still in its early stages, according to presenter Tarun Raj. They compared the algorithms' performance with that of radiologists on chest x-ray studies that had a follow-up chest CT exam. Using the CT report as the ground truth, they split the dataset into three test sets based on the interval -- one day, three days, and 10 days -- between the chest x-ray and the follow-up chest CT.
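The interval-based split described above can be sketched as a simple bucketing step. The record fields and the rule that each x-ray/CT pair goes into the shortest bucket it fits are assumptions for illustration; the article does not specify whether the buckets were cumulative or exclusive.

```python
from datetime import date

def split_by_interval(exams, max_days=(1, 3, 10)):
    """Assign each x-ray/CT pair to the shortest interval bucket it fits.

    Each exam is a dict with hypothetical 'xray_date' and 'ct_date' fields;
    these names are illustrative, not taken from the study.
    """
    buckets = {d: [] for d in max_days}
    for exam in exams:
        gap = (exam["ct_date"] - exam["xray_date"]).days
        for d in sorted(max_days):
            if gap <= d:
                buckets[d].append(exam)
                break
    return buckets

# Toy data: gaps of 1, 3, and 9 days between x-ray and follow-up CT
exams = [
    {"xray_date": date(2022, 3, 1), "ct_date": date(2022, 3, 2)},
    {"xray_date": date(2022, 3, 1), "ct_date": date(2022, 3, 4)},
    {"xray_date": date(2022, 3, 1), "ct_date": date(2022, 3, 10)},
]
buckets = split_by_interval(exams)
print({d: len(v) for d, v in buckets.items()})  # {1: 1, 3: 1, 10: 1}
```

Pairs whose gap exceeds the longest window would simply fall outside all three test sets under this sketch.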
Across all three test sets, the models showed higher sensitivity for abnormalities than the reporting radiologists, with the same or only slightly more false positives, according to Raj.
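The comparison above boils down to two standard metrics computed against the CT-derived ground truth: sensitivity (the share of true abnormalities flagged) and the false-positive count. A minimal sketch, with toy labels rather than the study's data:

```python
def sensitivity_and_false_positives(predictions, ground_truth):
    """Compute sensitivity (recall) and false-positive count.

    Labels are 1 = abnormality present, 0 = absent; the ground truth
    here stands in for CT-report-derived labels.
    """
    tp = sum(p == 1 and g == 1 for p, g in zip(predictions, ground_truth))
    fn = sum(p == 0 and g == 1 for p, g in zip(predictions, ground_truth))
    fp = sum(p == 1 and g == 0 for p, g in zip(predictions, ground_truth))
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    return sensitivity, fp

# Toy example: model vs. radiologist reads against the same ground truth
truth = [1, 1, 1, 0, 0, 1]
model = [1, 1, 0, 0, 1, 1]
rad   = [1, 0, 0, 0, 0, 1]
print(sensitivity_and_false_positives(model, truth))  # (0.75, 1)
print(sensitivity_and_false_positives(rad, truth))    # (0.5, 0)
```

The trade-off the study reports corresponds to the model's higher sensitivity coming at the cost of the same or a marginally larger false-positive count.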
"This implies that the deep-learning models trained on a large dataset of chest x-rays can pick up abnormalities that are not immediately visible on the chest x-ray but identified on a chest CT, enabling close to chest CT-level performance on a cost-effective and lower-radiation procedure," Raj told AuntMinnie.com.
What else did they find? Check out this talk to get more information.