Wednesday, December 3 | 1:40 p.m.-1:50 p.m. | W6-SSNMMI06-2 | S405
In this scientific session on innovations in nuclear medicine, a 3D AI model will be presented that can significantly improve the efficiency of reading whole-body F-18 FDG-PET/CT images by segmenting lesions on baseline scans.
Haoyue Zhang, PhD, a fellow at the National Cancer Institute who will present the study, and colleagues used a dataset from 57 patients with newly diagnosed multiple myeloma who underwent baseline F-18 FDG-PET/CT imaging and whole-body MRI, with MRI serving as the reference standard. The 3D deep-learning model had previously been developed on 1,014 PET/CT scans of patients with various solid cancers from a publicly available cohort, AUTOPET.
Among 13 PET-positive patients, the AI model detected lesions in 10 (sensitivity, 76.9%), while maintaining a specificity of 63.6% among the 44 PET-negative patients. Among 28 MRI-positive patients, comprising 12 with focal, 11 with diffuse, and five with combined patterns, the AI model identified eight, four, and one patient, respectively. Specificity against the MRI reference standard was 69%.
The study demonstrated the model's capability to detect lesions that may be overlooked by radiologists or that are not apparent on PET scans to the naked eye, Zhang and colleagues wrote.
“Automated lesion quantification holds potential for supporting standardized imaging assessments in multiple myeloma,” they concluded.
Plan on attending this Wednesday afternoon session to learn more.