Simulator training helps nonrads read chest x-rays


Nonradiologists can be taught to better detect nodules on chest radiographs using a training program that simulates a real-world radiology environment, rather than through more traditional didactic methods, according to a new study in the November Journal of the American College of Radiology.

A group of nonradiologist healthcare providers -- most of whom were medical students -- turned in a statistically significant improvement in the detection of lung nodules on chest x-rays after they participated in the training program, which instructed them to review images methodically on a simulated PACS workstation. After the experiment, most participants said they preferred the simulated training to more conventional learning, wrote researchers from Emory University and the Georgia Institute of Technology.

The study could offer a more effective mechanism for training nonradiologists, many of whom are being asked to identify abnormalities on chest radiographs before radiologists have a chance to read the images, according to the research team led by Dr. William Auffermann, PhD, of Emory (JACR, Vol. 12:11, pp. 1215-1222).

Traditional radiology training

Traditional training for medical image interpretation has relied heavily on materials that aren't very interactive, such as textbooks, journal articles, and Internet-based resources, the authors noted. At the same time, recent studies have indicated that more-interactive methods are enjoyable and can facilitate learning.

In recent years, radiology has seen the rise of more-interactive educational resources, but there is still a gulf between how radiology is taught and how it is actually practiced. So the authors decided to test the effectiveness of a training program that ditched the textbooks and instead closely paralleled the real-world radiology environment.

What's more, the group decided to test the simulation program on nonradiologists to address another emerging issue in healthcare: nonradiologist providers who are being called on to view medical images, such as to assess critical abnormalities on chest radiographs in emergency situations, determine the placement of central lines, or detect a pneumothorax.

"Although a radiologist often provides the final interpretation of an image, there are many instances when the image is initially viewed and interpreted by a nonradiologist," the authors wrote. "Simulation may be especially useful for nonradiologists because they may be less likely to receive comprehensive training at image perception and interpretation at a PACS viewing station."

To test their hypothesis that education could be improved through simulation, Auffermann and colleagues recruited 30 nonradiologist healthcare providers (22 of whom were medical students), who were split into two groups: experimental and control.

Images were viewed on a desktop personal computer that simulated a clinical radiology PACS workstation. The PC ran ViewDEX software (Sahlgrenska University Hospital, Gothenburg, Sweden), which selects and presents images from a database in random order, with viewing controls similar to those of a PACS workstation. Readers use the software to mark suspected nodules, and it records the mark coordinates along with the readers' confidence ratings.
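As described, the software's role boils down to randomized case presentation and response logging. A minimal Python sketch of that loop is below; the class and function names are invented for illustration and are not ViewDEX's actual interface.

    import random
    from dataclasses import dataclass, field

    @dataclass
    class Mark:
        x: float            # image coordinates of the suspected nodule
        y: float
        confidence: int     # reader's diagnostic confidence, 1 (low) to 5 (high)

    @dataclass
    class CaseResponse:
        case_id: str
        marks: list = field(default_factory=list)

    def run_session(case_ids, get_reader_marks):
        # Present cases in random order, as ViewDEX is described as doing,
        # and log every mark the reader places. `get_reader_marks` stands in
        # for the interactive viewer: given a case ID, it returns the Mark
        # objects the reader placed on that image.
        order = random.sample(case_ids, k=len(case_ids))
        return [CaseResponse(cid, get_reader_marks(cid)) for cid in order]

The coordinates and confidence ratings logged this way are what feed the accuracy analysis described below.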

After viewing a 10-minute computer presentation on chest radiography and familiarizing themselves with the software on five practice cases, both groups of readers were asked to interpret 25 chest radiography cases at the workstation. They were told to identify and mark suspicious nodules, and assign a score for diagnostic confidence on a scale of 1 to 5. About half of the cases contained suspicious nodules.

After the first set of cases was read, the experimental group received a course in search-pattern training. These readers were instructed to review the lungs on chest radiographs in a specific pattern, starting at the top and sweeping their eyes across each intercostal space, "starting medially and ending at the peripheral aspect of the pleural space," according to the authors. The entire training session given to the experimental group lasted 60 to 90 minutes.

Both groups of readers were then given another set of 25 cases to read, and the researchers assessed their accuracy on both sets using the area under the curve (AUC) from localization receiver operating characteristic (LROC) analysis, which is similar to standard ROC analysis but also accounts for a reader's ability to localize lesions.
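Formal LROC analysis has its own fitting machinery, but the core idea can be sketched simply: a nodule case only earns its confidence rating if the reader's mark actually lands near the true lesion. The distance threshold and field names below are illustrative assumptions, not the study's parameters.

    import math

    def lroc_ratings(cases, dist_thresh=30.0):
        # Reduce each case to one rating for localization ROC scoring.
        # Each case is a dict: 'lesion' is the (x, y) of the true nodule,
        # or None for a normal case; 'marks' is a list of (x, y, confidence).
        pos, neg = [], []
        for case in cases:
            marks = case['marks']
            if case['lesion'] is not None:
                lx, ly = case['lesion']
                hits = [c for (x, y, c) in marks
                        if math.hypot(x - lx, y - ly) <= dist_thresh]
                pos.append(max(hits) if hits else 0)  # mislocalized marks count as a miss
            else:
                neg.append(max((c for (_, _, c) in marks), default=0))
        return pos, neg

    def auc(pos, neg):
        # Probability that a nodule case outranks a normal case (ties count half).
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))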

As the table indicates, AUCs for the control group rose with the second set of cases, but the difference was not statistically significant. On the other hand, the experimental group did see a statistically significant increase in performance after the simulator training session.

Reader performance for lung nodules before and after simulator training

Group           AUC, case set 1    AUC, case set 2    p-value
Control              0.6662             0.7184          0.191
Experimental         0.6944             0.8024          0.015
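The article does not say how these p-values were computed. One generic way to test whether an AUC gain like the experimental group's could be due to chance, given per-case ratings of the kind sketched above, is a bootstrap over cases; the helper below reuses the random module and the auc() function from the earlier sketches and is an illustration, not the study's method.

    def bootstrap_p_value(set1, set2, n_boot=10000, seed=0):
        # `set1` and `set2` are (pos, neg) rating lists for the two case
        # sets, e.g. from lroc_ratings() above. Resample cases with
        # replacement and ask how often the AUC gain disappears.
        rng = random.Random(seed)
        resample = lambda xs: [rng.choice(xs) for _ in xs]
        gains = []
        for _ in range(n_boot):
            g = (auc(resample(set2[0]), resample(set2[1]))
                 - auc(resample(set1[0]), resample(set1[1])))
            gains.append(g)
        return sum(g <= 0 for g in gains) / n_boot  # one-sided p-value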

The researchers gave participants in the experimental group a survey on their preferences for training. The results showed that they preferred the simulator-based method by a statistically significant margin.

The eyes have it

In analyzing the findings, Auffermann and colleagues made several points. They believe that radiology training using a simulator environment could be superior to conventional training because trainees can be exposed to a greater volume of cases than they might see during their time on a clinical service -- without exposing patients to negative outcomes if the trainee makes an incorrect decision.

At the same time, training readers to evaluate images using specific eye movements could be a valuable educational tool, as previous studies have indicated that perceptual errors often result from scanning errors.

The researchers further noted that both training methods could be particularly valuable for nonradiologist healthcare providers, who are less likely to be trained in image perception and interpretation at a PACS workstation. They recommended more research into whether the training techniques in the study could be applied to radiologists.

"Because clinical medical image interpretation often occurs on computer workstations, training to perceive and interpret abnormalities on medical images accurately lends itself well to computer simulation," they concluded.
