In chest x-ray AI, two views are better than one

[Image: chest x-ray, lateral view]

An artificial intelligence (AI) model that analyzes both frontal and lateral chest radiographs performs better for classifying adenopathy, according to a September 20 presentation at the Conference on Machine Intelligence in Medical Imaging (C-MIMI).

Researchers from Johns Hopkins University trained deep-learning models on frontal x-rays alone, on lateral x-rays alone, and then on the two views in combination. They found that the combined algorithm yielded the best results for classifying thoracic adenopathy.

"This is actually analogous to the common practice of radiologists who use both projections -- frontal and lateral -- to glean synergistic diagnostic details," said presenter Ishan Mazumdar, a second-year medical student.

AI detection of pulmonary pathology on chest radiographs has heavily emphasized the frontal projection while largely excluding the lateral chest x-ray, even though the lateral view is obtained in a variety of clinical settings, according to Mazumdar. Radiologists, however, routinely use the lateral image to detect certain abnormalities that are more subtle and harder to see on the frontal projection.

"One example is hilar adenopathy, which is basically enlarged lymph nodes in the hilum of the lung and is present in a number of inflammatory, infectious, and neoplastic conditions," he said.

Although adenopathy is often an important indicator of pathology, there has been little AI research on detecting adenopathy on chest radiographs. The Johns Hopkins research team hypothesized that a deep-learning model trained on both frontal and lateral chest radiographs would perform better than a model trained on only one projection.

To test this, the group gathered training and validation data from the PadChest dataset of radiographs, supplemented with radiographs from Radiopaedia showing hilar adenopathy specifically in patients with sarcoidosis. A training set of 2,107 paired frontal and lateral chest x-rays was ultimately used to train the models.
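
As a rough illustration of how such paired studies might be loaded for training, below is a minimal PyTorch dataset sketch; the directory layout, file names, and label encoding are hypothetical, and PadChest and Radiopaedia images would each need their own parsing in practice.

```python
# Minimal sketch of loading paired frontal/lateral studies (hypothetical
# file layout and label encoding; not the study's actual data pipeline).
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset

class PairedChestXrayDataset(Dataset):
    """Yields one (frontal, lateral, label) triple per study.

    Assumes one directory per study containing frontal.png and
    lateral.png, with the label encoded as an "_adenopathy" suffix
    on the directory name (an assumption made for this sketch).
    """

    def __init__(self, root: str, transform=None):
        self.studies = sorted(p for p in Path(root).iterdir() if p.is_dir())
        self.transform = transform

    def __len__(self):
        return len(self.studies)

    def __getitem__(self, idx):
        study = self.studies[idx]
        frontal = Image.open(study / "frontal.png").convert("L")
        lateral = Image.open(study / "lateral.png").convert("L")
        if self.transform is not None:
            frontal = self.transform(frontal)
            lateral = self.transform(lateral)
        label = int(study.name.endswith("_adenopathy"))
        return frontal, lateral, label
```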

The researchers opted for a two-step deep convolutional neural network pipeline: an object-detection model first localized and cropped the hilum on the radiographs, and a classification algorithm then labeled the cropped image as showing thoracic adenopathy or not.
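
The presentation did not specify the underlying architectures, but a minimal sketch of such a two-step pipeline might look like the following, assuming (hypothetically) a Faster R-CNN hilum detector and a ResNet-18 binary classifier:

```python
# Minimal sketch of the two-step detect-then-classify pipeline (the exact
# architectures were not specified; Faster R-CNN and ResNet-18 are
# assumptions made for illustration).
import torch
from torchvision.models import resnet18
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import resized_crop

# Step 1: an object-detection model localizes the hilum
# (two classes: background and hilum).
detector = fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
detector.eval()

# Step 2: a binary classifier labels the cropped hilum
# (adenopathy vs. no adenopathy).
classifier = resnet18(weights=None)
classifier.fc = torch.nn.Linear(classifier.fc.in_features, 2)
classifier.eval()

@torch.no_grad()
def classify_hilum(image: torch.Tensor) -> torch.Tensor:
    """Detect the hilum, crop it, and classify the crop.

    `image` is a 3 x H x W float tensor in [0, 1]; returns softmax
    probabilities over (no adenopathy, adenopathy). Assumes the detector
    returns at least one box (torchvision sorts boxes by score).
    """
    detections = detector([image])[0]
    x1, y1, x2, y2 = detections["boxes"][0].round().int().tolist()

    # Crop the detected hilum and resize to the classifier's input size.
    crop = resized_crop(image, top=y1, left=x1,
                        height=y2 - y1, width=x2 - x1, size=[224, 224])

    logits = classifier(crop.unsqueeze(0))
    return logits.softmax(dim=1).squeeze(0)
```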

They then tested the models on an external test set of 129 paired radiographs from patients at Johns Hopkins University who were diagnosed with sarcoidosis involving any organ. The test set had a higher prevalence of hilar adenopathy compared with the general population, Mazumdar noted. For the purposes of the study, the ground truth was determined by board-certified thoracic radiologists after reviewing contemporaneous chest CT images.

The performance of each model was averaged over 10 testing "runs."

Improved AI performance for thoracic adenopathy by including both frontal and lateral chest x-rays

                       Frontal chest x-ray model   Lateral chest x-ray model   Combined model
Area under the curve   0.607                       0.732                       0.759
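
For illustration, averaging the area under the ROC curve (AUC) over repeated test runs can be sketched as follows; the labels and scores below are synthetic stand-ins, not the study's data.

```python
# Sketch of the evaluation scheme: compute the AUC on the 129-study
# external test set for each of the 10 runs, then average. Labels and
# scores here are synthetic stand-ins for illustration only.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=129)  # ground truth for 129 paired studies

aucs = []
for _ in range(10):  # 10 testing runs
    scores = rng.random(129)  # stand-in for a model's output probabilities
    aucs.append(roc_auc_score(labels, scores))

print(f"mean AUC over {len(aucs)} runs: {np.mean(aucs):.3f}")
```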

"We demonstrate an improvement in diagnostic accuracy when using deep-learning models trained using both frontal and lateral chest x-rays," he said.

In addition to hilar adenopathy, other chest pathologies may also be better detected on the lateral projection, according to Mazumdar.

"Therefore, deep-learning models that ignore the lateral chest radiographs are not taking full advantage of commonly available data," he said.

Mazumdar acknowledged the study's limitations, such as the fact that the training set included radiographs of patients with any type of adenopathy, not just adenopathy from sarcoidosis. Because the test set specifically comprised patients with sarcoidosis, this mismatch likely lowered the algorithm's diagnostic performance, he said.

Also, the ground truth for adenopathy is difficult to establish and subject to interobserver variability, and the optimal method of combining frontal and lateral chest x-rays isn't yet known, he said.

In the next phase of their work, the researchers plan to investigate different deep-learning methods for generating combined frontal and lateral predictions.
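
Two candidate fusion strategies they might compare are decision-level (late) fusion, which averages the per-view probabilities, and feature-level fusion, which concatenates per-view features before a shared classification head. The sketch below is illustrative only and is not confirmed as the team's method.

```python
# Two common ways to combine frontal and lateral predictions, sketched
# for illustration; neither is confirmed as the presenters' approach.
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    """Average the softmax outputs of separate frontal and lateral models."""

    def __init__(self, frontal_model: nn.Module, lateral_model: nn.Module):
        super().__init__()
        self.frontal_model = frontal_model
        self.lateral_model = lateral_model

    def forward(self, frontal, lateral):
        p_frontal = self.frontal_model(frontal).softmax(dim=1)
        p_lateral = self.lateral_model(lateral).softmax(dim=1)
        return (p_frontal + p_lateral) / 2

class FeatureFusion(nn.Module):
    """Concatenate per-view feature vectors, then classify jointly.

    Both backbones are assumed to output flat (batch, feat_dim) features.
    """

    def __init__(self, frontal_backbone: nn.Module, lateral_backbone: nn.Module,
                 feat_dim: int, num_classes: int = 2):
        super().__init__()
        self.frontal_backbone = frontal_backbone
        self.lateral_backbone = lateral_backbone
        self.head = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, frontal, lateral):
        feats = torch.cat([self.frontal_backbone(frontal),
                           self.lateral_backbone(lateral)], dim=1)
        return self.head(feats)
```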
