AI outperforms physicians for interpreting chest x-rays


An artificial intelligence (AI) algorithm was able to identify major thoracic diseases on chest radiographs better than even thoracic radiologists could, offering potential to improve the quality and efficiency of clinical practice, according to a study published online March 22 in JAMA Network Open.

Researchers found that their deep-learning algorithm was more accurate in classifying chest radiographs and localizing lesions than a group of 15 physicians, including thoracic radiologists. When used as a second reader, the algorithm also significantly enhanced the physicians' performance.

"A deep learning-based algorithm may help improve diagnostic accuracy in reading chest radiographs and assist in prioritizing chest radiographs, thereby increasing workflow efficacy," wrote the group led by Dr. Eui Jin Hwang from Seoul National University College of Medicine in South Korea.

Interpreting chest radiographs can be a challenging, error-prone task that requires expert readers. An automated system that accurately classifies chest radiographs could help streamline the clinical workflow, according to the researchers.

To that end, they sought to develop an automated algorithm that could classify chest radiographs as normal or abnormal for major thoracic diseases such as pulmonary malignant neoplasms, active tuberculosis, pneumonia, and pneumothorax.

The algorithm (Lunit Insight for Chest Radiography) was trained using 54,221 chest radiographs with normal findings and 35,613 chest radiographs with abnormal findings. The researchers then assessed the algorithm's performance on an external validation set from five institutions that consisted of 486 normal chest radiographs and 529 chest radiographs with abnormal results. It produced a median area under the curve (AUC) of 0.979 for image classification and 0.972 for lesion localization.
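For readers unfamiliar with the metric, the sketch below shows how an area under the curve (AUC) can be computed for a binary normal-versus-abnormal classifier on a validation set. The labels and scores are hypothetical placeholders for illustration, not data from the study or output from the Lunit software.

```python
# Minimal sketch: computing AUC for a normal-vs.-abnormal chest radiograph classifier.
# The labels and probabilities below are hypothetical placeholders, not study data.
from sklearn.metrics import roc_auc_score

# 1 = abnormal finding present, 0 = normal radiograph (hypothetical validation labels)
y_true = [0, 0, 1, 1, 0, 1, 1, 0, 1, 0]

# Probability of abnormality output by the model for each radiograph (hypothetical)
y_score = [0.05, 0.20, 0.85, 0.70, 0.10, 0.95, 0.60, 0.30, 0.90, 0.15]

# An AUC of 1.0 means perfect separation of normal and abnormal studies; 0.5 is chance level.
auc = roc_auc_score(y_true, y_score)
print(f"AUC = {auc:.3f}")
```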

Next, the researchers enlisted 15 physicians -- five nonradiology physicians, five board-certified radiologists, and five thoracic radiologists -- to participate in an observer performance study on a subset of the external validation dataset.

In the first of two sessions, the observers independently assessed and classified each chest radiograph and localized each lesion using freehand annotation. In the second session, the observers re-evaluated each study with assistance from the algorithm and modified their original decisions if necessary.
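As a conceptual illustration of such an AI-assisted second read (not the study's actual software or operating point), the sketch below flags studies where a hypothetical AI abnormality score disagrees with a reader's initial call, prompting a re-review; the threshold, study identifiers, and scores are assumptions.

```python
# Conceptual sketch of an AI-assisted second read; not the study's actual workflow.
# The threshold, study IDs, reader calls, and scores are hypothetical assumptions.
AI_THRESHOLD = 0.5  # assumed operating point for calling a study abnormal

readings = [
    # (study_id, reader_call_abnormal, ai_abnormality_score)
    ("CXR-001", False, 0.92),
    ("CXR-002", True, 0.88),
    ("CXR-003", False, 0.07),
]

for study_id, reader_abnormal, ai_score in readings:
    ai_abnormal = ai_score >= AI_THRESHOLD
    if ai_abnormal != reader_abnormal:
        # Disagreement: the reader revisits the images with the AI output displayed
        # and may keep or modify the original decision.
        print(f"{study_id}: re-review suggested (reader abnormal={reader_abnormal}, AI score={ai_score:.2f})")
    else:
        print(f"{study_id}: reader and AI agree")
```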

Compared with all physician groups in the study, the algorithm yielded significantly higher performance for both image classification (p < 0.005) and lesion localization (p < 0.001). After using the AI algorithm, however, all physician groups experienced statistically significant improvements in image classification (p < 0.005) and lesion localization (p < 0.001).

Performance of AI algorithm vs. physicians for chest radiographs (values are AUCs)

                         All physician groups    AI algorithm    Physician groups with AI assistance
Image classification     0.814-0.932             0.979           0.904-0.958
Lesion localization      0.781-0.907             0.972           0.873-0.938

The AI algorithm's strong classification performance also suggests potential for standalone use in certain clinical situations, as well as for prioritizing studies with suspicious findings that require prompt diagnosis and management, according to the researchers.
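As a rough illustration of that triage idea, a reading worklist could be reordered so that studies with higher AI abnormality scores are opened first. The study identifiers, scores, and data layout below are assumptions for the sketch, not features of the published work or the commercial product.

```python
# Minimal sketch of worklist prioritization by AI abnormality score.
# Study IDs and scores are hypothetical; no real product interface is shown.
worklist = [
    {"study_id": "CXR-104", "ai_score": 0.12},
    {"study_id": "CXR-105", "ai_score": 0.91},  # likely abnormal -> read first
    {"study_id": "CXR-106", "ai_score": 0.47},
]

# Sort so the most suspicious studies reach a radiologist soonest.
prioritized = sorted(worklist, key=lambda s: s["ai_score"], reverse=True)

for study in prioritized:
    print(f"{study['study_id']}: AI abnormality score {study['ai_score']:.2f}")
```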

"It can also improve radiologists' work efficiency, which would partially alleviate the heavy workload burden that radiologists face today and improve patients' turnaround time," they wrote.

The algorithm may also prove useful as a second reader, according to the researchers.
