Reliable identification and communication of urgent radiology results are crucial to the diagnosis and treatment of various diseases. At the hospital level, however, existing efforts to identify urgent findings from radiology reports have largely relied on human labor or naive rule-based systems, according to presenter Yuhao Zhang of Stanford University in Stanford, CA.
"Meanwhile, the recent advancements in deep learning and [NLP] have largely improved our ability to extract and understand information from text, providing us with opportunities to automate the detection of urgent findings from reports," Zhang said.
The researchers from Stanford and Brown University in Providence, RI, used historical radiology report data to train a deep learning-based NLP model to identify findings of different acuity levels from reports. They found that the model could surpass the performance of traditional feature-engineered classifiers by a large margin, Zhang said.
"Moreover, this model can explain its decision by highlighting the text relevant to its assigned acuity levels," Zhang told AuntMinnie.com. "This demonstrates its potential of replacing human labor on the identification of urgent findings."
Automating the assignment of acuity codes can reduce variability and, therefore, improve communication between radiologists and referring providers, said principal investigator Dr. Curt Langlotz, PhD, also from Stanford.
"In this study and others, we find that deep learning improves accuracy over older methods that employ handcrafted word features," he said.
How did they achieve these results? Check out this Tuesday morning presentation to find out.