LLMs perform well for annotating radiology reports


Monday, December 1 | 9:30 a.m.-9:40 a.m. | S4-SSIN02-1 | Room E450B

In this scientific presentation, researchers will present a framework for deploying large language models (LLMs) in real-world radiology settings.

LLMs have rapidly evolved, opening up promising opportunities in the annotation of radiology reports, particularly for identifying specific diagnostic findings, noted presenter Mana Moassefi, MD, of the Mayo Clinic, and colleagues. In their study, the authors assessed the effectiveness of a human-optimized LLM prompt for extracting diagnoses from radiology reports across six major U.S. institutions.

After prompt engineering was conducted at the Mayo Clinic, an open-source Python script incorporating the prompt was distributed to all six sites. The script ran a common LLM locally at each site, allowing consistent analysis and comparison against site-specific reference annotations, according to the researchers.
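The abstract does not disclose the actual prompt wording, model, or inference tooling. As a rough sketch of what such a locally executed annotation script might look like, the example below assumes the open-source ollama Python client, a locally pulled llama3 model, and an illustrative prompt asking for findings as JSON; all of those choices are assumptions, not the study's methods.

```python
# Illustrative sketch only: the study's actual prompt, model, and tooling
# are not public. This assumes the ollama Python client and a local model.
import json
import ollama

# Hypothetical standardized prompt, shared verbatim across all sites.
PROMPT_TEMPLATE = (
    "You are annotating a radiology report. Return each diagnostic "
    "finding present in the report as a JSON array of strings, and "
    "nothing else.\n\nReport:\n{report}"
)

def annotate_report(report_text: str, model: str = "llama3") -> list[str]:
    """Run the shared prompt against a locally hosted LLM."""
    response = ollama.chat(
        model=model,
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(report=report_text)}],
    )
    # Assumes the model complies with the JSON-array instruction.
    return json.loads(response["message"]["content"])

if __name__ == "__main__":
    sample = "CT chest: 8 mm nodule in the right upper lobe. No effusion."
    print(annotate_report(sample))
```

Because the prompt and script are fixed and only the reports differ, each site can run the same annotation pipeline behind its own firewall and compare the output against its local reference labels.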

They found that the prompt showed high consistency across sites and pathologies.

“This study demonstrates a practical, collaborative framework for deploying LLMs in real-world radiology settings,” Moassefi and colleagues wrote in the abstract. “By using standardized prompts and local execution, institutions can leverage LLMs to rapidly annotate reports with minimal overhead -- laying the groundwork for scalable AI integration in clinical workflows.”

Next, the researchers plan to explore the model's robustness to diverse report structures and to further refine the prompts to improve generalizability.

Learn more at this Monday morning talk.
