Mayo trains modified BigBird model for autoprotocoling

Liz Carey, Feature Writer

Wednesday, December 3 | 9:30 a.m.-9:40 a.m. | W3-SSIN06-1 | Room E450B

This session will provide a performance report for two different models for automatically generating protocols for imaging studies.

Using nearly 500,000 Mayo Clinic region-based patient records, data scientists have trained three modified BigBird models (extended-context, sparse-attention transformers) to automatically predict imaging protocols from free-text reports.

The project involved constructing large language model (LLM) input prompts from compiled Mayo Clinic patient records, normalizing medical terms to Unified Medical Language System concept unique identifiers (UMLS CUIs), and, importantly, including patient "problem lists" (PLs), according to presenter Barbaros Erdal, PhD, and colleagues.
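The preprocessing described above can be sketched roughly as follows. This is a minimal illustration, not Mayo's pipeline: the term-to-CUI table is a toy stand-in (a real system would use a UMLS Metathesaurus-backed entity linker), and all function names here are hypothetical.

```python
# Hypothetical sketch: map free-text medical terms to UMLS concept unique
# identifiers (CUIs) and assemble an LLM input prompt that includes the
# patient's problem list (PL). The lookup table below is illustrative only.

TERM_TO_CUI = {
    "headache": "C0018681",        # UMLS: Headache
    "hypertension": "C0020538",    # UMLS: Hypertensive disease
    "diabetes mellitus": "C0011849",  # UMLS: Diabetes Mellitus
}

def normalize(text: str) -> str:
    """Replace known terms with their UMLS CUIs (case-insensitive)."""
    out = text
    for term, cui in TERM_TO_CUI.items():
        # Simple substring replacement; real systems use span-based NER linking.
        idx = out.lower().find(term)
        if idx != -1:
            out = out[:idx] + cui + out[idx + len(term):]
    return out

def build_prompt(report: str, problem_list: list[str]) -> str:
    """Concatenate the normalized report with the normalized problem list."""
    pl = "; ".join(normalize(p) for p in problem_list)
    return f"REPORT: {normalize(report)}\nPROBLEM LIST: {pl}"

prompt = build_prompt(
    "MRI brain requested for chronic headache.",
    ["hypertension", "diabetes mellitus"],
)
print(prompt)
```

The key design point the abstract highlights is the last line of the prompt: including the normalized problem list gave the models the patient context that drove the reported performance gains.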

Ahead of RSNA 2025, the group reported "exceptional performance" when the LLMs included the PLs. The division-based model that included PLs yielded F1 scores of 0.86, 0.89, and 0.88 for Mayo Rochester, Florida, and Arizona, respectively, according to the results.
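The F1 scores above are the harmonic mean of precision and recall. A minimal per-class computation for a multiclass protocol classifier, using toy labels (not Mayo's data), looks like this:

```python
# Per-class F1 from scratch: precision and recall for one class, combined
# as their harmonic mean. Labels below are invented for illustration.

def f1_for_class(y_true, y_pred, cls):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = ["mri_brain", "ct_chest", "mri_brain", "ct_chest", "mri_brain"]
y_pred = ["mri_brain", "ct_chest", "ct_chest", "ct_chest", "mri_brain"]
f1 = f1_for_class(y_true, y_pred, "mri_brain")  # precision 1.0, recall 2/3
print(round(f1, 2))
```

F1 is a reasonable choice here because protocol classes are typically imbalanced: a model could score high accuracy by always predicting the most common protocol, while F1 penalizes both missed and spurious assignments per class.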

However, "having one model per-region rather than per-division we have experienced to be easier to maintain," the group noted in their scientific abstract.

Ultimately, the group found that the modified BigBird model (extending BERT to sequences of 8,192 tokens) could automatically predict imaging protocols from free-text patient reports and may be useful for streamlining workflows and reducing manual protocol selection errors. The models developed showed acceptable F1 performance for most radiological protocols in the Mayo dataset.
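The reason BigBird can handle 8,192-token inputs where standard BERT cannot is its block-sparse attention pattern: each token attends to a sliding window of neighbors, a few global tokens, and a few random tokens, so the number of attended pairs grows linearly rather than quadratically with sequence length. A toy illustration (not Mayo's code, and a simplification of the actual block-sparse implementation):

```python
# Illustrative sketch of a BigBird-style sparse attention pattern:
# window + global + random connections, giving O(n) attended pairs
# instead of the O(n^2) of full self-attention.
import random

def bigbird_mask(n, window=3, n_global=2, n_random=2, seed=0):
    """Return the set of (query, key) index pairs the sparse mask allows."""
    rng = random.Random(seed)
    pairs = set()
    for q in range(n):
        # Sliding window: each token sees its nearby tokens.
        for k in range(max(0, q - window), min(n, q + window + 1)):
            pairs.add((q, k))
        # Global tokens attend everywhere and are attended by everyone.
        for g in range(n_global):
            pairs.add((q, g))
            pairs.add((g, q))
        # A few random long-range connections per token.
        for k in rng.sample(range(n), min(n_random, n)):
            pairs.add((q, k))
    return pairs

sparse_pairs = len(bigbird_mask(512))
full_pairs = 512 * 512
print(f"sparse: {sparse_pairs} pairs vs full attention: {full_pairs} pairs")
```

With these parameters each token touches on the order of a dozen positions regardless of sequence length, which is what makes training on long compiled patient records tractable.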

Erdal serves as technical director for the Center for Augmented Intelligence in Imaging in the department of radiology at Mayo Clinic in Florida. Attend to hear how all models scored and ask questions.
