Researchers from the University of Chicago were able to cut their reject rate for digital radiography (DR) studies by almost 50% through an aggressive program of reject analysis and radiologic technologist education. However, they found their efforts hampered by the lack of an industry-standard terminology for rejected DR studies.
The university's reject rate for digital x-ray exams -- including both digital radiography and computed radiography (CR) -- fell from 24.5% to 13% over the course of the project. That's a major reduction, but it was still above the sub-10% target the researchers set at the beginning of their investigation, according to an article published online September 20 in the Journal of the American College of Radiology.
The group encountered a number of obstacles during the project, including the need to manually download data from individual DR units -- a hassle that could be eliminated if DR vendors developed industry-standard terminology for rejects that would enable data to be automatically sent over a network for analysis.
Rising reject rates
It's simply a fact of life that some x-ray studies have to be rejected due to poor patient positioning or other technical factors. Rejected images impede patient workflow, waste healthcare resources, and can expose patients to unnecessary radiation.
Imaging facilities always strive to keep their reject rate low -- but not too low: A reject rate of 0% would indicate that technologists are probably sending radiologists some x-rays that aren't suitable for interpretation. A reject rate between 6% and 10% is probably optimal, according to a 2015 report from a working group of the American Association of Physicists in Medicine (AAPM).
The arrival of computed radiography made it easier for imaging facilities to drive their reject rate lower because images with exposure problems could be fixed with postprocessing. But the trend has begun to reverse as CR is replaced by DR in most U.S. imaging departments.
DR units already come with software that analyzes rejects. In an ideal world, the data could be transferred over a network into a single unified database and analyzed using big-data techniques. But we're not there yet: Instead, there is a Tower of Babel, with DR machines from different vendors using their own data nomenclatures -- varying terms for why images are rejected, for example, or differing descriptions of patient anatomy.
At the University of Chicago, managers and medical physicists began to suspect that the facility's reject rate was rising after its adult radiography department converted to DR in 2013. But since they could only perform reject-rate analysis based on technologist self-reporting, it was hard to know for sure.
Part of the problem was that the university operates DR systems from multiple manufacturers, all spitting out data in different formats, according to the group led by Kevin Little, PhD, who was a resident in physics at the university during the study. Little is now a medical physicist at Ohio State University.
"We had all these different numbers from all these machines, and compiling that into one central place was difficult," Little told AuntMinnie.com. "We wanted to be able to look at things across the board, as opposed to at a single radiography machine."
So the researchers began a project to create a unified database that would make things easier. First, they developed a set of five standardized categories to describe why an image might be rejected, and they mapped each vendor's data to the categories. Next, they created standardized anatomical areas and mapped vendor data to those.
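To make the mapping step concrete, here is a minimal sketch of how vendor-specific reject reasons could be translated into a small set of unified categories. The category names and vendor strings below are illustrative placeholders, not the actual terms or categories used in the University of Chicago study.

```python
# Illustrative mapping of vendor-specific reject-reason strings to a small set
# of standardized categories. All names here are hypothetical examples.

STANDARD_CATEGORIES = {"positioning", "exposure", "motion", "artifact", "other"}

# Per-vendor lookup tables translating each machine's reject-reason text
# into one of the standardized categories.
VENDOR_REJECT_MAP = {
    "vendor_a": {
        "Positioning Error": "positioning",
        "Over Exposure": "exposure",
        "Patient Moved": "motion",
    },
    "vendor_b": {
        "POS": "positioning",
        "EXP HIGH": "exposure",
        "EXP LOW": "exposure",
        "ARTIFACT": "artifact",
    },
}

def standardize_reason(vendor: str, raw_reason: str) -> str:
    """Map a vendor's raw reject-reason string to a standardized category."""
    mapping = VENDOR_REJECT_MAP.get(vendor, {})
    return mapping.get(raw_reason.strip(), "other")
```

The same lookup-table approach could be applied to the anatomical areas, with each vendor's anatomy labels mapped to the standardized regions.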
Some DR units weren't outputting exposure logs to the network, so technologists had to go to the machines and offload the data manually using a USB stick. The data could then be imported into the database and reformatted for analysis.
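A workflow like this might look roughly as follows: exposure logs offloaded as files (CSV is assumed here) are loaded into a single database for later analysis. The column names, file layout, and database schema below are assumptions for illustration, not the study's actual implementation.

```python
# Minimal sketch: load exposure logs offloaded from a DR unit (assumed CSV files
# on a USB stick) into one SQLite database so all units can be analyzed together.
import csv
import sqlite3
from pathlib import Path

def import_exposure_logs(log_dir: str, db_path: str = "rejects.db") -> None:
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS exposures (
               vendor TEXT, unit_id TEXT, exam_date TEXT,
               anatomy TEXT, rejected INTEGER, reject_reason TEXT)"""
    )
    # Each CSV file is one exposure log pulled from a DR unit.
    for csv_file in Path(log_dir).glob("*.csv"):
        with open(csv_file, newline="") as f:
            for row in csv.DictReader(f):
                conn.execute(
                    "INSERT INTO exposures VALUES (?, ?, ?, ?, ?, ?)",
                    (row.get("vendor"), row.get("unit_id"), row.get("exam_date"),
                     row.get("anatomy"), int(row.get("rejected") or 0),
                     row.get("reject_reason")),
                )
    conn.commit()
    conn.close()
```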
Alarming findings
Little and colleagues began analyzing the initial data that were acquired starting in September 2014, and they were surprised by what they found -- an overall reject rate for DR studies of 24.5%, much higher than the target of under 10% and "alarming for physicists, radiologists, and managers," the authors wrote. Reject rates for some specific anatomical areas were even higher, including a reject rate for chest exams of 27.1%.
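For illustration, overall and per-anatomy reject rates such as these could be computed from a unified database like the one sketched above; the table and column names are assumptions, not the authors' actual tooling.

```python
# Sketch: compute the overall reject rate and a breakdown by anatomical area
# from the hypothetical "exposures" table described earlier.
import sqlite3

def reject_rates(db_path: str = "rejects.db") -> None:
    conn = sqlite3.connect(db_path)
    total, rejected = conn.execute(
        "SELECT COUNT(*), SUM(rejected) FROM exposures"
    ).fetchone()
    if total:
        print(f"Overall reject rate: {100.0 * (rejected or 0) / total:.1f}%")

    # Break the rate down by standardized anatomical area (e.g., chest).
    for anatomy, n, r in conn.execute(
        "SELECT anatomy, COUNT(*), SUM(rejected) FROM exposures GROUP BY anatomy"
    ):
        print(f"{anatomy}: {100.0 * (r or 0) / n:.1f}%")
    conn.close()
```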
The project team began working to reduce the reject rate through education and training with technologists, but this only went so far, with a reject rate of 17% recorded in December 2014. So in January 2015 they began to monitor reject rates for individual technologists -- a step they had been reluctant to take at the start.
Technologists who had the highest reject rates were notified and offered individual coaching sessions. The goal of a reject rate lower than 10% was noted, and radiologic technologists were told that those who had the highest reject rates would be expected to show personal improvement over time. In addition, the project team held another educational presentation at staff meetings in October 2015.
As a result of these interventions, reject rates began dropping, falling to 12.1% in December 2015, which was the end of the study period.
But the group still encountered challenges in managing reject rates. For one of its DR vendors, it was difficult to match acquisition data from the machine to the technologist who performed the study: the acquisition data didn't include any identifiable information that could be searched in the RIS, so the group couldn't calculate individual reject rates for those units.
Going forward
Little and colleagues recommended a number of improvements that could make the job of calculating reject rates easier. For example, it would be helpful to have exposure logs that included data that were standardized across DR vendors. This would require cooperation between vendors and possibly a new standard from the National Electrical Manufacturers Association (NEMA). At least one of the university's DR vendors is working on reformatting its data into a more usable format based on the group's work, Little said.
The researchers also provided recommendations for the minimum information they believe should be included in a standardized exposure log, including information related to rejected images such as identification as a reject, reason for the rejection, rejection date and time, and a link to a thumbnail of the rejected image. In addition, data in the log should be accessible over a local network, or stored securely in the cloud, to make automated data analysis easier.
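As a rough illustration of what such a standardized record might contain, here is a minimal sketch built around the reject-related fields the authors recommend; the field names and types are assumptions, not a proposed or existing standard.

```python
# Hypothetical standardized exposure-log record; field names are illustrative.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ExposureLogEntry:
    exam_id: str                          # identifier linking the exposure to a study
    anatomy: str                          # standardized anatomical area
    is_reject: bool                       # identification of the image as a reject
    reject_reason: Optional[str]          # standardized reason for the rejection
    reject_datetime: Optional[datetime]   # rejection date and time
    thumbnail_url: Optional[str]          # link to a thumbnail of the rejected image
```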
The university's reject rate has stabilized at about 13% in recent months, the group wrote. That's higher than the goal of less than 10%, but it's within the range for DR reject rates reported in the literature. The researchers noted that managers hope to make additional progress by using individual reject rates as a metric for performance reviews of individual technologists.
"We knew that it can take a long time to change habits. We have a lot of technologists and lot of people, and it can take a long time to change the attitude and culture," Little said. "It has been a good learning experience for everyone -- it's good to have a target that everyone can work toward."