Novel viewing methods figure prominently in radiologists' hopes for better ways to examine CT colon data. Toward this end, researchers have sought numerous ways to eliminate blind spots and expose more of the colonic mucosa to review.
VC viewing innovations have included Mercator projections, cubed displays, "unfolding" algorithms that flatten the colonic surface to make lesions easier to see, and even colon painting programs that clearly mark regions the radiologist has examined.
But all of these methods have shortcomings that either leave substantial blind spots or lengthen review time significantly. Now researchers from the Korea Advanced Institute of Science and Technology in Daejeon, South Korea, have developed a way of reducing unexamined regions of the colon wall (IEEE Transactions on Medical Imaging, August 2005, Vol. 24:8, pp. 957-968).
The new method is based on automated path planning, in which the camera path is defined as a sequence of camera-pose parameters composed of "view positions referring to the camera location, view directions referring to the line of sight, and up-vectors referring to the upper direction of the image acquired from a camera or displayed on a screen," according to authors Dong-Goo Kang, Ph.D., and Jeong-Beom Ra, Ph.D.
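In code terms, such a path can be pictured as an ordered list of pose records, one per rendered frame. The sketch below is a minimal illustration of that data structure in Python; the names and layout are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CameraPose:
    """One sample along a virtual fly-through path (illustrative only).

    position  -- view position: the 3D camera location inside the lumen
    direction -- view direction: unit vector along the line of sight
    up        -- up-vector: unit vector giving the image's upward direction
    """
    position: np.ndarray   # shape (3,)
    direction: np.ndarray  # shape (3,)
    up: np.ndarray         # shape (3,)

# A planned path is then simply an ordered sequence of such poses,
# traversed antegrade and then retrograde during navigation.
FlyThroughPath = list[CameraPose]
```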
The development of automated path-planning algorithms has been essential for the virtual fly-through function because the complex shape of the human colon makes manual planning difficult and time-consuming, the authors wrote.
"For complete and accurate diagnosis, a planned path should not produce significant blind areas on the colon surface," Kang and Ra noted. "However, a recent study shows that with existing path-planning algorithms, more than 20% of the colon surface is in a blind area" (Academic Radiology, December 2003, Vol. 10:12, pp. 1,380-1,391).
"In existing path-planning methods, the centerline is regarded as the best camera position for efficient and comfortable navigation," they wrote. "For years these algorithms have focused on improving the centeredness, robustness, and execution speed of the centerline abstraction."
However, traditional path-planning tools can produce significant blind spots in the tortuous environment of the colon, with its haustral folds and steep curves. To reduce the resulting blind spots, the centerline-based system would need to be modified to "pause" and "look up and down" between haustral folds, according to the authors.
The new technique therefore seeks to minimize blind regions by assuming that the practical blind area between the folds is clinically negligible when antegrade and retrograde navigation are combined, they wrote.
"Our proposed algorithm first approximates the surface of the object by estimating the overall shape and cross-directional thicknesses," the researchers stated. "View positions and their corresponding view directions are then jointly determined to enable us to maximally observe the approximated surface. Moreover, by adopting bidirectional navigations, we may reduce the blind area blocked by haustral folds. For comfortable navigation, we carefully (smooth) the obtained path and minimize the amount of rotation between consecutively rendered images."
The visibility coverage used to define a camera path is computed as follows (a rough sketch of the final ratio appears after the list):
- Segmenting a volume of the colon lumen and extracting its surface points
- Detecting spatially visible surface points
- Counting temporally visible or observable surface points
- Calculating the visibility coverage as the ratio of the number of observable surface points to the number of total surface points on a colon segment
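Expressed as code, the last step of that list reduces to a simple ratio. The following minimal sketch assumes the spatial and temporal visibility tests have already flagged each surface point; the function and variable names are hypothetical.

```python
import numpy as np

def visibility_coverage(observable_mask: np.ndarray) -> float:
    """Visibility coverage = observable surface points / total surface points.

    observable_mask -- boolean array with one entry per extracted surface
    point on the colon segment; True means the point was counted as
    temporally visible (observable) somewhere along the fly-through.
    """
    total_points = observable_mask.size
    if total_points == 0:
        return 0.0
    return float(np.count_nonzero(observable_mask)) / total_points

# Example: 9,600 of 10,000 surface points observable -> coverage of 96%.
mask = np.zeros(10_000, dtype=bool)
mask[:9_600] = True
print(f"coverage = {visibility_coverage(mask):.2%}")
```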
The path-planning algorithm involves two steps: estimation of the initial camera positions and directions, followed by smoothing of those positions. The first step is accomplished by segmenting the colon lumen with a seeded region-growing algorithm; the surface is then approximated by computing the centerline and a 3D distance map.
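The seeded region growing mentioned above can be pictured roughly as a 3D flood fill that grows a lumen label outward from a seed voxel through connected air-like voxels. The sketch below only illustrates that general idea; the Hounsfield-unit threshold and 6-connectivity are assumptions, not the paper's parameters.

```python
from collections import deque
import numpy as np

def grow_lumen(volume: np.ndarray, seed: tuple, air_threshold: float = -800.0) -> np.ndarray:
    """Illustrative seeded region growing for the colon lumen.

    volume        -- 3D CT volume in Hounsfield units
    seed          -- (z, y, x) voxel known to lie inside the lumen
    air_threshold -- voxels below this HU value are treated as lumen air
    """
    lumen = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    lumen[seed] = True
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in neighbors:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]
                    and not lumen[nz, ny, nx]
                    and volume[nz, ny, nx] < air_threshold):
                lumen[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return lumen
```

From a lumen mask like this, the surface points, centerline, and 3D distance map described above can be derived with standard distance-transform and thinning tools (for example, scipy.ndimage.distance_transform_edt).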
The second step simultaneously determines the view positions and directions needed to maximally observe the approximated colon surface. The view positions are smoothed to provide for comfortable navigation. Smoothing is accomplished by removing high-frequency components in the centerline while preserving local curvedness.
"By using a low-pass filter with an appropriate cutoff frequency, unwanted high-frequency components can be suppressed while preventing a big shift of the original centerline," the researchers wrote. In the event the smoothed centerline is pushed outside the colon, "we detected the (oversmoothed) part and reapply the filter with a higher cutoff frequency to the corresponding original centerline so that the whole centerline may be located inside the colon lumen."
Finally, the "up-vectors" (the upper directions of an image acquired from a camera or displayed on a screen) are determined after obtaining the view positions and directions. "In existing path-planning methods, tangential vectors on the centerline are generally used as view directions, and the decision method of up-vectors has been out of favor in current research," the authors noted. Yet without this step, blind areas may exist even with bidirectional navigation when the fold-to-fold distance is not long enough.
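The article does not spell out how the up-vectors are computed, but one common way to keep image rotation small between consecutive frames is to carry the previous up-vector forward and re-project it perpendicular to each new view direction. The sketch below illustrates that general idea only; it is not the authors' specific method.

```python
import numpy as np

def propagate_up_vectors(view_dirs: np.ndarray, initial_up: np.ndarray) -> np.ndarray:
    """Assign an up-vector to each view direction with minimal frame-to-frame twist.

    view_dirs  -- (N, 3) float array of unit view directions along the path
    initial_up -- starting up-vector (need not be exactly perpendicular)
    """
    ups = np.zeros_like(view_dirs)
    up = np.asarray(initial_up, dtype=float)
    for i, d in enumerate(view_dirs):
        # Remove the component of the previous up-vector along the new
        # line of sight, then renormalize; this keeps image rotation small.
        up = up - np.dot(up, d) * d
        norm = np.linalg.norm(up)
        if norm < 1e-8:
            # Degenerate case: the previous up-vector was parallel to the new
            # direction; restart from the coordinate axis least aligned with it.
            axis = np.eye(3)[np.argmin(np.abs(d))]
            up = np.cross(d, axis)
            norm = np.linalg.norm(up)
        up = up / norm
        ups[i] = up
    return ups
```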
"The total surface of an object can be approximated by a set of slice surfaces," the authors wrote of their algorithm. "Hence, if we choose each camera position and its corresponding view direction so that the camera can view the corresponding slice surface, most of the surface points along the path are spatially visible. Furthermore, when we sequentially view consecutive overlapping slice-surfaces as the camera moves, the points on the surface may be temporarily visible because the slice surfaces overlap."
To evaluate their algorithm, the researchers quantified the overall observable area on the basis of the temporal visibility that reflects the minimum interpretation time of a human reader.
They compared their algorithm to a conventional algorithm in which view positions are determined from the centerline and view directions are set to the tangential direction of the centerline. For a fair comparison, both algorithms used antegrade and retrograde navigation, and the same number of view positions, or images, was extracted in each comparison so that the fly-through time would be the same.
"The interval of the sampled point on the centerline determines the degree of overlap among the slice surfaces," Kang and Ra wrote. And "the degree of overlap directly affects two important parameters: the observing time for shape interpretation and the total navigation time." Shape and navigation time were therefore determined before fixing the overlap or navigation speed. For a colon of 1.5 meters, and an interval between neighboring center positions of about 1 mm, total evaluation time was 300 seconds.
Assuming that surface points should be displayed for at least one second, "the proposed algorithm enables us to observe 96% to 99% of the surface points for a camera FOV of 60°," the team wrote. "These results are 3% to 6% better than the results of a conventional algorithm."
The results are degraded somewhat with the use of a smaller field-of-view, in part because it increases the invisible area generated between the haustral folds. However, the proposed algorithm always outperforms its conventional counterpart. And while some areas between the folds are still invisible, most are smaller than 5 mm.
"The graphs clearly show that our proposed algorithm not only reduces the number of blind patches, but also concentrates the diameter of the blind polyps to below 5 mm," Kang and Ra wrote. "Thus, the visibility increases due to our algorithm further improves the ability of clinical diagnosis.... The results of our simulation show that our proposed algorithm improves visibility at the highly curved regions of a human colon."
Finding a way to minimize the blind areas between haustral folds may be a worthy subject for further research, they added.
By Eric Barnes
AuntMinnie.com staff writer
August 30, 2005
Related Reading
Colon CAD: VC's extra eyes face new challenges, August 5, 2005
Part II: Computer-aided detection identifying new targets, July 15, 2005
Part I: Computer-aided detection marking new targets, July 14, 2005
VC CAD matches prone and supine imaging data, June 28, 2005
CAD struggles through tagged, subtracted VC data, May 18, 2005
Copyright © 2005 AuntMinnie.com