BERT-based AI poised for use in radiology

AI models based on Google’s BERT are poised to play a pivotal role in radiology, according to a review published January 30 in the Journal of the American College of Radiology.

In an analysis of 30 studies, the researchers found that BERT has been successfully harnessed primarily for classification tasks and for extracting information from radiology reports, noted lead author Larisa Gorenstein, MD, of Tel Aviv University in Israel, and colleagues.

“As BERT technology advances, we foresee further innovative applications. Its implementation in radiology holds potential for enhancing diagnostic precision, expediting report generation, and optimizing patient care,” the group wrote.

BERT (Bidirectional Encoder Representations from Transformers) is a natural language processing (NLP) foundation model that Google introduced in 2018 and has since used to improve query understanding in its search engine. The model is pretrained on a massive corpus of unlabeled text and learns to predict missing words in sentences from their surrounding context, the authors explained.

Moreover, BERT's pretrained weights can be fine-tuned, allowing it to transfer its learned language understanding for various specific NLP tasks, notably in radiology, they wrote.
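For readers unfamiliar with how this works in practice, the following is a minimal, illustrative sketch of BERT's masked-word prediction; it is not code from the review, and it assumes the open-source Hugging Face transformers library, the public bert-base-uncased checkpoint, and an invented radiology-style sentence.

```python
# Illustrative sketch (not from the reviewed studies): BERT's masked-word
# prediction via the Hugging Face "transformers" library.
from transformers import pipeline

# Load a pretrained BERT checkpoint together with its masked-language-modeling head.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT scores candidate words for the [MASK] slot using context on both sides
# of the gap (the "bidirectional" in its name).
for prediction in fill_mask("The chest x-ray shows a small pleural [MASK]."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```

Fine-tuning swaps this pretrained prediction head for a task-specific one, which is how the classification and extraction applications described below are typically built.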

In this review, the researchers assessed the scope of these tasks as they’ve been applied using BERT-based models in the field, with the goal of identifying new possibilities for broader clinical applications.

The authors conducted a search on PubMed for literature on BERT-based models and NLP tasks in radiology published from January 2018 to February 2023. Of the 597 results, 30 studies met the inclusion criteria; the rest were excluded as unrelated to radiology. All of the included studies were retrospective, with 14 published in 2022, the group noted.

According to the findings, “classification” – in which a BERT-based model is trained to predict the classes of new findings based on patterns the model learned from labeled training data – was the most common task, with 18 studies in this category.

Six of these 18 studies focused primarily on binary classification, with the main objective of sorting radiology reports into two categories: those containing findings that require further workup or intervention and those that do not, the authors wrote.
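As a rough illustration of what such a binary classifier entails (the reviewed studies’ code, models, and data are not reproduced here), the sketch below fine-tunes a pretrained BERT checkpoint with the Hugging Face transformers Trainer on two invented report snippets; a real study would use thousands of labeled institutional reports.

```python
# Hypothetical sketch: fine-tuning BERT to flag radiology reports that need
# further workup (label 1) versus those that do not (label 0).
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # binary classification head

class ReportDataset(Dataset):
    """Wraps tokenized report text and binary labels for the Trainer."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=512)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

# Two invented report snippets stand in for a labeled training corpus.
train_ds = ReportDataset(
    ["No acute cardiopulmonary abnormality.",
     "New 8 mm pulmonary nodule; follow-up CT recommended."],
    [0, 1])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-report-clf",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train_ds)
trainer.train()
```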

In addition, nine of the included papers focused on information extraction – the use of BERT-based models to automatically identify and extract structured information from unstructured radiology reports.
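A comparably hedged sketch of the information-extraction setup is shown below: BERT with a token-classification head that tags each word of a report with an entity label. The finding/anatomy label scheme is an invented example rather than one taken from the reviewed papers, and the head produces meaningless tags until it is fine-tuned on annotated reports.

```python
# Hypothetical sketch: BERT with a token-classification head for extracting
# structured entities (e.g., findings and anatomy) from free-text reports.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

labels = ["O", "B-FINDING", "I-FINDING", "B-ANATOMY", "I-ANATOMY"]  # invented label set
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels))

text = "Small pleural effusion at the left lung base."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one score per token per label

# After fine-tuning on annotated reports, the argmax over labels yields
# finding/anatomy spans; before fine-tuning these tags are random.
predicted = [labels[i] for i in logits.argmax(dim=-1)[0].tolist()]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
print(list(zip(tokens, predicted)))
```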

Two more papers focused on BERT-based models for automatically assigning CT protocols, while another two investigated the use of deep learning with BERT for interpreting chest x-rays.

“This review sheds light on the diverse roles of BERT-based models in radiology. Classification and information extraction are the primary applications, showing the potential of these models to process unstructured radiological data efficiently,” the group wrote.

Ultimately, based on the review, the authors suggested that integrating BERT-based models into clinical decision support systems could improve diagnostic accuracy by processing complex unstructured data, such as patient histories, alongside imaging results.

The studies were not without limitations, the authors noted: all had a retrospective design, and most relied on data from a single institution. Moreover, like any machine learning model, BERT is susceptible to biases, with data bias being an important concern, they wrote.

“The wide range of clinical applications highlighted in our review, from protocol assignment to automatic interpretation, indicates a rapidly changing landscape in radiology,” the group wrote.

The authors disclosed that they used another NLP model, OpenAI's ChatGPT, to correct spelling mistakes as they prepared the article, and that they reviewed and edited the content as needed after using the tool.

