Dr. Vidur Mahajan, associate director of Mahajan Imaging in New Delhi, will present results from a single deep-learning algorithm that automates the reading of normal chest x-rays and could virtually eliminate the drudgery of second reads.
The deep-learning model was trained on approximately 250,000 chest x-rays from CheXpert, a large public chest x-ray dataset released by the Stanford University Machine Learning Group for developing artificial intelligence (AI) models, along with some 50,000 chest x-rays from a U.S. National Institutes of Health (NIH) dataset.
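The abstract does not describe the model's architecture, but a minimal sketch of how such a normal-versus-abnormal classifier might be trained on the pooled data could look like the following. The DenseNet-121 backbone is an assumption (it is a common baseline for CheXpert work, not the authors' stated choice), and the random tensors stand in for real image batches:

```python
# Sketch only -- not the authors' code. Backbone and training details are assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=None)                        # assumed backbone
model.classifier = nn.Linear(model.classifier.in_features, 1)   # binary head: abnormal vs. normal

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in batch: in practice, images would come from the pooled CheXpert
# (~250,000) and NIH (~50,000) studies described above.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()                    # 1 = abnormal, 0 = normal

logits = model(images)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
```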
The algorithm was then tested on three datasets totaling approximately 4,000 cases: two were drawn from three outpatient imaging centers and three hospital imaging departments, while the third was used to validate the AI model.
At a sensitivity threshold of 97%, the model's initial specificity ranged from 2% to 41% across the three datasets. After tuning with a single reference image from the NIH chest x-rays, specificity increased to between 29% and 63%.
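For a concrete picture of this operating-point evaluation, the sketch below fixes sensitivity at 97% and reports the specificity that falls out. The scores and labels here are synthetic stand-ins, not the study's data:

```python
# Pick the decision threshold that achieves 97% sensitivity, then measure
# specificity at that operating point (hypothetical scores, for illustration).
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)   # 1 = abnormal, 0 = normal (synthetic labels)
scores = np.where(y_true == 1,
                  rng.uniform(0.3, 1.0, 1000),   # abnormal cases score higher
                  rng.uniform(0.0, 0.7, 1000))   # normal cases score lower

fpr, tpr, thresholds = roc_curve(y_true, scores)
idx = np.argmax(tpr >= 0.97)        # first threshold reaching 97% sensitivity
print(f"threshold={thresholds[idx]:.3f}  "
      f"sensitivity={tpr[idx]:.2%}  specificity={1 - fpr[idx]:.2%}")
```

Holding sensitivity fixed is what makes the specificity numbers comparable across the three test sites: the model misses the same small fraction of abnormal cases everywhere, and the question becomes how many normal x-rays it can correctly clear.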
Based on the "drastic improvement in results," the deep-learning model can be generalized across equipment and institutions by using a "single reference image to tune the functioning of the model, hence showing potential to improve the functioning of deep-learning algorithms in general," the researchers concluded in their abstract.