Radiologists at one New Zealand institution were able to read wrist x-rays just as well on a PC as they did on a PACS workstation, despite the substantial technical differences, according to a Radiology report.
"Radiologists are often called on to look at radiographs on a PC when a workstation is not available.... It is important to know how accurate the PC-based interpretation of radiographs is in comparison to that with a PACS workstation.... Wrist fractures are often subtle, thus providing a reasonably rigorous test of observer performance," wrote Dr. Anthony Doyle and colleagues at Middlemore Hospital in Auckland (Radiology, October 19, 2005).
For this retrospective study, Doyle's group recruited seven observers whose experience ranged from more than 20 years in general and musculoskeletal radiology to that of a preradiology intern. The team gathered 259 wrist x-rays from the hospital's PACS (Pathspeed, version 8.1, GE Healthcare, Chalfont St. Giles, U.K.), including fractures and nonfractures.
X-rays were obtained using photostimulable phosphor plates and the images were digitized. These 12-bit images were sent to the PACS short-term storage device, where they could be retrieved either at the workstation or on a PC via a Web browser. Viewing images on the PC, however, required the use of windowing and magnification tools, the authors stated.
The x-rays were assigned a number from 1 to 259, then divided into blocks of 50. Each observer viewed the first 50 images at a workstation and the next 50 at a PC, going back and forth between the two until all 259 had been read. After a four- to eight-week interval, the observers read the radiographs again and the viewing platform was reversed.
After assessing whether fractures were visible, each observer scored the x-rays on a scale of 1 (definitely normal with no fracture) to 5 (definite fracture); a score of 3 meant the observer was unsure of the diagnosis. To calculate sensitivity and specificity, a score of 3 or higher was counted as positive for fracture, while a score of 1 or 2 was deemed negative, the authors explained.
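As a rough illustration of how such a five-point scale is dichotomized, the Python sketch below computes sensitivity, specificity, and accuracy with a threshold of 3 or higher; the scores and reference labels are hypothetical placeholders, not the study's data.

scores = [1, 2, 5, 3, 4, 1, 2, 5, 3, 1]   # observer confidence scores (1-5), hypothetical
truth  = [0, 0, 1, 1, 1, 0, 1, 1, 0, 0]   # reference standard: 1 = fracture, hypothetical

calls = [1 if s >= 3 else 0 for s in scores]   # score of 3 or higher counts as a fracture call

tp = sum(1 for c, t in zip(calls, truth) if c == 1 and t == 1)
tn = sum(1 for c, t in zip(calls, truth) if c == 0 and t == 0)
fp = sum(1 for c, t in zip(calls, truth) if c == 1 and t == 0)
fn = sum(1 for c, t in zip(calls, truth) if c == 0 and t == 1)

sensitivity = tp / (tp + fn)              # fraction of true fractures called positive
specificity = tn / (tn + fp)              # fraction of normal wrists called negative
accuracy = (tp + tn) / len(truth)         # overall agreement with the reference standard

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} accuracy={accuracy:.2f}")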
The results showed that all but one of the observers preferred the workstation over the PC. All took slightly longer to review the images on the PC, although no time limit had been set for this study.
The area under the ROC curve (Az) for the PC was 0.9103 versus 0.9181 for the workstation. "The Az values and 95% confidence intervals for the individual readers showed no significant difference between the PC and workstation," the group wrote.
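For readers who want to see how such Az values can be computed, the following is a minimal Python sketch using scikit-learn's roc_auc_score on the full five-point confidence scores; the data are hypothetical and the library choice is an assumption, as the study used its own ROC analysis software.

from sklearn.metrics import roc_auc_score

truth     = [0, 0, 1, 1, 1, 0, 1, 1, 0, 0]   # reference standard, hypothetical
pc_scores = [1, 2, 5, 3, 4, 1, 2, 5, 3, 1]   # confidence scores from PC readings, hypothetical
ws_scores = [1, 1, 5, 4, 4, 2, 3, 5, 2, 1]   # confidence scores from workstation readings, hypothetical

az_pc = roc_auc_score(truth, pc_scores)      # area under the ROC curve (Az) for the PC
az_ws = roc_auc_score(truth, ws_scores)      # Az for the workstation

print(f"Az (PC) = {az_pc:.3f}, Az (workstation) = {az_ws:.3f}")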
The average sensitivity with the PC was 84% for a score of 3 or higher; on the workstation, sensitivity was one percentage point higher. Specificity was similar: 82% for the PC and 81% for the workstation. The average accuracy, 83%, was the same for both.
To assess the memory effect, the authors tallied 610 incorrect diagnoses; in 36% of these cases, the observer changed the diagnosis when the same image was seen on the workstation after the PC.
"The observers performed to an essentially identical standard with both platforms," the authors concluded. This is despite the fact that the PC monitor has intrinsically more noise and convergence error, cannot display as many pixels, and is only a little more than one-third as bright as the workstation monitor, they stated.
Still, a PC "cannot replace a workstation for routine diagnostic reporting purposes, especially when substantial volumes of work are involved," they added. Also, equivalent performance may not hold true for other organ systems.
However, the upside of these findings is that reliable PC reads could increase reading throughput, as well as reduce the financial burden of having to provide workstations, the authors pointed out.
By Shalmali Pal
AuntMinnie.com staff writer
October 31, 2005
Related Reading
MR shows strength in imaging long bone fractures, October 21, 2005
MDCT, scintigraphy yield very different reads on wrist fractures, May 18, 2005
Copyright © 2005 AuntMinnie.com