U.K. software assesses radiographers who read images


In the U.S., the interpretation of medical images has long been considered the purview of radiologists. But things are different across the Atlantic Ocean, particularly in the U.K., where radiographers are being called on to interpret images in some situations. The question is, how well are they doing?

A new software tool offers the ability to answer that question by assessing and monitoring the interpretation skills of radiographers. Called RadBench, the package measures image interpretation performance metrics such as sensitivity, specificity, and accuracy. The software was developed at Sheffield Hallam University in Sheffield, U.K., and researchers from the institution discussed their experience in a new study published in Radiography.

"Traditionally, image interpretation was the domain of the radiologist, but this is changing," lead author Chris Wright, PhD, told AuntMinnie.com via email. "For example, increasingly musculoskeletal reporting in many countries of the world is being done by highly trained radiographers, not radiologists."

Testing, 1-2-3

Image interpretation by radiographers currently has three different levels, according to Wright and co-author Pauline Reeves, PhD:

  • Radiographer abnormality detection systems, which allow radiographers to place a red dot on possible areas of pathology
  • Preliminary clinical evaluation systems, which allow them to replace the red dot with a written comment
  • Clinical reporting, which is performed by reporting radiographers who have had postgraduate training

But there has been little in the way of assessment resources for any of these functions, Wright said.

"There are several e-learning tools for image interpretation out there, but no measurement platform for accuracy, sensitivity, and specificity. RadBench was developed to make image interpretation assessment available around the world," he said. "Not only can respondents measure their own performance, but organizations can also compare themselves to other organizations."

Chris Wright, PhD, from Sheffield Hallam University.

For the study, Wright and Reeves created two tests for RadBench, each of which contained 20 appendicular musculoskeletal images. Half of these images were normal and half contained fractures. The test sets included ankle, foot, knee, hand, wrist, and elbow images; three of each set of 20 images were from children, while 17 of each set were from adults (Radiography, January 20, 2016).

In all, 42 radiology professionals participated in the pilot study, 34 of whom were general radiographers with no training in reporting. Three were reporting radiographers with specialized training in image interpretation, while two were radiologists and three were medical imaging academics.

Participants viewed images sequentially, although each participant could go back and forth within the image set if desired, and each image could be viewed full screen. Participants were asked to rate each image on a five-point scale: 1 for "definitely normal," 2 for "probably normal," 3 for "possibly abnormal," 4 for "probably abnormal," and 5 for "definitely abnormal."

Once participants submitted their tests, RadBench calculated each person's sensitivity, specificity, and accuracy.
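The study doesn't publish RadBench's internal scoring code, but the three metrics are the standard confusion-matrix calculations. Below is a minimal sketch in Python, assuming a rating of 3 ("possibly abnormal") or higher counts as an abnormal call; the actual cutoff RadBench applies to the five-point scale isn't specified in the paper.

```python
# Sketch of the standard sensitivity/specificity/accuracy arithmetic.
# The cutoff of >= 3 ("possibly abnormal") is an assumption, not a
# documented RadBench setting.

def score_test(ratings, truths, cutoff=3):
    """ratings: five-point scores (1-5) for each image.
    truths: True if the image actually contains a fracture.
    Returns (sensitivity, specificity, accuracy) as fractions."""
    tp = fp = tn = fn = 0
    for rating, abnormal in zip(ratings, truths):
        called_abnormal = rating >= cutoff
        if called_abnormal and abnormal:
            tp += 1          # fracture correctly flagged
        elif called_abnormal and not abnormal:
            fp += 1          # normal image flagged as abnormal
        elif not called_abnormal and not abnormal:
            tn += 1          # normal image correctly cleared
        else:
            fn += 1          # fracture missed
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy
```

On a 20-image RadBench test with 10 fractures, a sensitivity of 0.90 would mean 9 of the 10 fractures were flagged, and a specificity of 0.80 would mean 8 of the 10 normal images were correctly cleared.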

What did RadBench show? Reporting radiographers, radiologists, and medical imaging academics all scored 95% to 100%, while general radiographers scored between 60% and 95%, Wright and Reeves found.

Reporting accuracy by profession as scored by RadBench

Group                                            Mean accuracy   Mean sensitivity   Mean specificity
Radiologists/reporting radiographers/academics        99%              98%               100%
General radiographers                                 82%              89%                75%

The study results showed that general radiographers scored higher on sensitivity than on specificity, so efforts to train them in reading images should focus on the ability to recognize normal anatomy, Wright and Reeves wrote.

"The ability to identify fractures was better than the ability to identify normal variants," they wrote. "The overall effect was to reduce the accuracy score. [So this could be] the first point of focus for developing the nonreporting radiographer population."

Game changer?

Decision-making in musculoskeletal imaging is moving away from the radiologist and into the purview of reporting radiographers, Wright and Reeves wrote. But reporting radiographers alone are unlikely to keep up with healthcare demand -- which is why training and monitoring general radiographers is a good idea. Not only could something like RadBench change the way imaging professionals are trained, but it could also offer radiographers a concrete way to develop their field, according to Wright.

"RadBench could offer hiring organizations a way to evaluate radiologists as part of the recruitment process, and it could be a very helpful educational supplement for medical students, junior doctors, and trainee radiologists," Wright told AuntMinnie.com. "But especially for radiographers, RadBench is a potential game changer. They want to develop as a profession, and they need evidence that they can perform to an equal standard as radiologists."
