Sunday, December 1 | 9:50 a.m.-10:00 a.m. | S1-SSCH01-5 | E451A
This scientific paper may increase overall confidence in the potential of multimodal AI for tuberculosis (TB) detection, and possibly autonomous reporting, on chest radiographs in certain clinical settings.
The research, from Seoul National University Hospital in South Korea and the University of Maastricht in the Netherlands, examines the feasibility of relying on multimodal generative AI for TB detection. Such AI could increase efficiency and deliver performance comparable to radiologists in regions where TB prevalence is high but radiologists are few, according to the team.
For the analysis, radiologist Eun Kyoung (Amy) Hong, MD, and colleagues used two public chest x-ray datasets, Shenzhen and Montgomery, from the U.S. National Library of Medicine, and an AI report generation model (karacxr.ai) available online for research purposes.
The study involved one radiologist extracting text descriptions from AI-generated reports of all lung and pleural abnormalities, except lung inflation status, the abstract noted. The team tested the stand-alone performance of the AI model in detecting TB-related abnormalities, comparing it to independent readings of three radiologists. They also tested autonomous reporting.
The AI model demonstrated sensitivity, specificity, and accuracy of 94.4%, 89.2%, and 91.8%, respectively, in detecting abnormalities in TB patients; for the three radiologists, the same metrics ranged from 91.9% to 94.7%, 89.4% to 96.8%, and 92% to 94.5%, respectively, the team noted. The AI-generated reports achieved autonomous reporting rates of 73.1% (585/800) and 61.4% (491/800), as assessed by two radiologists.
While further validation is necessary to fully establish the model's clinical efficacy and ensure its reliability in broader clinical practice, Hong and colleagues said the multimodal AI report generation model showed promising performance. Stop by the session with your questions.