Training key to VC performance, but how much is anyone's guess

Back in the early days of virtual colonoscopy, say the 1990s, the exam was performed almost exclusively at academic centers, where highly trained abdominal radiologists labored over small numbers of datasets, eager to demonstrate VC's viability for colorectal cancer screening.

Their success, and no doubt the need for less-invasive screening, has brought the practice of VC to smaller screening centers and community hospitals everywhere.

"There is substantial interest in the medical community, in the public, and in the media," said Dr. Jorge Soto, an associate professor of radiology at Boston University Medical Center in Boston. "Many radiologists are being asked, and in some cases forced, to start a VC program."

But virtual colonoscopy stands apart from many other CT exams in the difficulty of performing it well. Special software is required, the learning curve is steep, and leading practitioners are unanimous that dedicated training is needed, Soto said in October at the 2004 International Symposium on Virtual Colonoscopy in Boston.

"There are some procedures -- RF ablation, PET scanning, cardiac MR -- in which we require focused training, CME courses, and hands-on training," Soto said. "(Virtual colonoscopy), I believe, is one of these."

The problem is that no reliable criteria exist to define adequate training. And as Soto explained in his talk, clinical studies on the topic have been maddeningly inconclusive. A leading hypothesis is that aptitude is an important factor in success, one that researchers hope will diminish somewhat as computer-aided detection comes of age.

The American College of Radiology's working group on CT colonography is looking at the issue, as are VC practitioners, who learn about teaching VC as they conduct their training workshops. At the same time, however, mediocre results seen in a couple of recent multicenter studies have brought training issues to the fore, Soto said.

"(Poor multicenter trial results) have created some doubts about the true diagnostic capability of CTC (VC), and the technique has become the target of very careful scrutiny by the media, by payors, and by radiologists and gastroenterologists," he said. "There is near-uniform agreement that experience affects diagnostic performance. The question is to what extent."

Training is just one of many issues at play in VC studies, of course, so its effects are difficult to measure directly. Yet there seems to be some kind of connection between training and results.

"There are many other factors -- from hardware and software to the quality of datasets, the quality of the gold standard, the effect of tagging agents -- that are perhaps as important or even more important than training," he said. "But beyond hardware, software, and technology, what are the minimum requirements? Can we quantify to determine what is or is not adequate? What is the effect of prior experience, both in CT and abdominal radiology?" The training lessons of general trials are hard to gauge.

In Pickhardt and colleagues' 2003 U.S. Department of Defense study, for example, radiologists each had 25 cases or more under their belts before the trial began, and produced excellent results, including sensitivity of 89% for polyps 6-9 mm in a setting of primary 3D interpretation (New England Journal of Medicine, December 4, 2003, Vol. 349:23, pp. 2191-2200).

On the other hand, some of the readers in the multicenter trial by Cotton and colleagues had practiced on as few as 10 cases each before starting the study. Overall, the group's by-polyp sensitivity for lesions 10 mm and larger was just 55% (Journal of the American Medical Association, April 14, 2004, Vol. 291:14, pp. 1713-1719).

A recently concluded multicenter trial led by Dr. Donald Rockey from Duke University in Durham, NC, did only slightly better, with reported overall sensitivity of 59% for lesions 10 mm and larger. But Rockey's group found no correlation between prior experience and performance, Soto said. (The study was subsequently published in The Lancet, January 22, 2005, Vol. 365:9456, pp. 305-311.)

Four dedicated studies

In a recent study published in European Radiology, a team led by Dr. Stuart Taylor of St. Mark's Hospital in London examined the effects of additional experience on newly trained readers: a consultant, a research fellow, and a trainee. None had experience reading virtual colonoscopy data before training on 50 cases, after which each was given an additional 50 virtual colonoscopy datasets to report (European Radiology, June 2004, Vol. 14:6, pp. 1025-1033).

"Only the trainee significantly improved after training (p = 0.007)," though the trainee's reporting times increased, Taylor and colleagues wrote in their abstract, "with performance of other readers unchanged or even worse." Compared to the consultant, the odds of either the fellow or the trainee detecting larger polyps was 36%.

Overall, readers varied considerably in their ability to read virtual colonoscopy studies, though the team noted a distinct reading advantage associated with gastrointestinal radiology experience. Still, they concluded, "competence cannot be assumed even after directed training via a database of 50 cases."

In 2002, Gluecker and colleagues from the Mayo Clinic in Rochester, MN, looked specifically at the impact of the learning curve on performance in virtual colonoscopy. Two teams, each consisting of a radiologist and a gastroenterologist, examined 50 patients, and the data were assessed for diagnostic accuracy after the first 25 cases (group 1) and the second 25 cases (group 2). Over the second 25 cases, the overall number of false-positive findings decreased significantly (p = 0.02), as did mean evaluation time, which fell from 45 to 17 minutes. Overall sensitivity and the number of false-negatives didn't change, but specificity rose from 42% for team 1 and 58% for team 2 in the first set of 25 cases to 79% for both teams in the second set (European Radiology, June 2002, Vol. 12:6, pp. 1405-1409).
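
For context, the sensitivity and specificity figures quoted throughout these studies reduce to simple ratios of reader calls against the reference standard (usually conventional colonoscopy). Below is a minimal sketch in Python using hypothetical counts; the numbers are illustrative only and are not drawn from the Gluecker study or any other trial discussed here:

    # Hypothetical per-patient tallies for a set of 25 cases (illustrative
    # only -- not taken from any trial cited in this article).
    tp, fn = 9, 2    # true positives, false negatives (patients with lesions)
    tn, fp = 11, 3   # true negatives, false positives (lesion-free patients)

    sensitivity = tp / (tp + fn)   # lesions detected: 9/11, about 82%
    specificity = tn / (tn + fp)   # negatives correctly called: 11/14, about 79%

    print(f"Sensitivity: {sensitivity:.0%}, specificity: {specificity:.0%}")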

Finally, at the June 2004 European Society of Gastrointestinal and Abdominal Radiology (ESGAR) meeting in Geneva, Halpern et al reported that highly experienced readers picked up twice as many polyps, with fewer false positives, as "novice" readers who had nevertheless read more than 50 cases each, Soto said.

"It's pretty clear from these four papers that not only do we not know exactly what we consider expert for interpretation," we don't know the steepness or slope of the learning curve for reading virtual colonoscopy exams, or where the plateaus are, he said.

There are always opinions

To get an idea of what the experts consider adequate training, Soto polled 18 leading VC radiologists on training issues. A summary of his results showed that:

  • 100% believe that specialized VC training is necessary.

  • Two-thirds limited their training programs to radiologists.

  • 89% said the ideal course format consisted of formal lectures, plus supervised hands-on training at a workstation, in a program lasting a day and a half to two days.

  • 78% believed that training should include both primary 2D and primary 3D interpretation.

  • 83% said that training should include a minimum of 40 cases, with at least 10% "normal" cases. Respondents also favored including a wide range of abnormalities in the training datasets.

  • 78% said that some cases should include fluid and fecal tagging.

  • 94% said that training should include issues beyond interpretation.

Soto's April 2004 survey found that 11 of the 18 radiologists were conducting training sessions at the time. Nine of the 11 courses consisted of formal lectures plus supervised workstation training, while the other two consisted of supervised workstation training only, he said. Only two courses offered a full training examination, and seven offered CME credit.

"I think there is unanimous agreement for focused and directed training for CTC (VC) interpretation," Soto said. "Unfortunately, there is less agreement as to what constitutes adequate training. The effect of prior experience is perhaps critical, and reader variability might be a factor that affects the results of multicenter trials." Workstation training is mandatory, he added, and more training studies can only help.

By Eric Barnes
AuntMinnie.com staff writer
February 8, 2005

Related Reading

CAD for VC improves reader performance, sensitivity for larger polyps, May 18, 2004

VC researchers push for study quality, consistency, March 17, 2004

Experience sharpens role of IV contrast-enhanced VC, January 19, 2004

Copyright © 2005 AuntMinnie.com
