Thursday, December 4 | 9:50 a.m.-10:00 a.m. | SSCH09-3 | Room E451A
Radiologists found that a generative AI model for interpreting chest x-rays was useful for worklist prioritization and quality assurance at their large radiology practice.
Developed using the practice's extensive database of chest x-ray studies and corresponding reports, the model analyzed all prospective chest x-rays over a two-week period and generated a text-based clinical report for each study.
For every study, a natural language processing (NLP) model with 155 outputs, each representing a distinct chest x-ray finding, was then run on both the generated report and the ground-truth radiologist report so the two sets of findings could be compared.
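The abstract doesn't spell out the mechanics of that comparison, but the idea can be sketched in a few lines of Python, assuming the NLP model reduces each report to a set of per-finding labels (the function and variable names here are illustrative, not the group's code):

```python
# Illustrative sketch only -- not the group's actual pipeline. Each report is
# assumed to have been reduced by the NLP model to a dict of
# {finding_name: bool} covering the 155 predefined chest x-ray findings.
def find_discrepancies(generated: dict[str, bool], ground_truth: dict[str, bool]) -> set[str]:
    """Return the finding labels on which the two reports disagree."""
    return {name for name in generated if generated.get(name) != ground_truth.get(name)}

# Example: the generated report calls a pneumothorax that the radiologist report does not.
gen = {"pneumothorax": True, "cardiomegaly": True}
gt = {"pneumothorax": False, "cardiomegaly": True}
print(find_discrepancies(gen, gt))  # {'pneumothorax'}
```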
"Chest [x-ray] studies have a degree of interpretation error related to inherent modality and visualization limitations," the presenter, data scientist Robert Harris, PhD, and colleagues noted in advance.
The model ran on 34,680 studies during the two-week period. Studies for which the generated report was positive for pneumothorax were prioritized in the reading queue, and studies with discrepant findings between the model-generated and radiologist reports were sent for secondary review.
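As a rough illustration of how that routing could be wired up, continuing the sketch above (the worklist and review-queue objects are hypothetical stand-ins for a real RIS/PACS integration, not the practice's system):

```python
# Illustrative routing sketch only; reuses find_discrepancies() from the example above.
def route_study(study_id: str, generated: dict[str, bool],
                ground_truth: dict[str, bool] | None,
                worklist: list[str], review_queue: list[str]) -> None:
    # Prospective step: bump suspected pneumothorax to the front of the reading queue.
    if generated.get("pneumothorax", False):
        worklist.insert(0, study_id)
    else:
        worklist.append(study_id)
    # Retrospective step: once the signed radiologist report exists, flag any disagreement.
    if ground_truth is not None and find_discrepancies(generated, ground_truth):
        review_queue.append(study_id)
```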
During the run, the model's sensitivity and specificity for pneumothorax were 62.4% and 99.3%, respectively, according to the group. In addition, nine of the 36 studies flagged for secondary review (25%) were positive for a missed pneumothorax.
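For readers keeping the definitions straight, sensitivity is the fraction of truly positive studies the model flags, and specificity is the fraction of truly negative studies it correctly leaves unflagged. A minimal sketch, using placeholder counts rather than the study's actual data:

```python
# Standard confusion-matrix definitions, shown only to make the reported metrics
# concrete; the counts below are placeholders, not figures from the study.
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)

print(sensitivity(tp=53, fn=32))   # ~0.624, i.e. 62.4%
print(specificity(tn=993, fp=7))   # 0.993, i.e. 99.3%
```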
Interestingly, 44% of radiologists rated the model-generated report as equivalent in quality to their own. The model may relieve some radiologist workload and help alleviate the national radiologist shortage, according to the group.
Attend the session to learn more.