Optimal CT image compression should yield the smallest possible data size while accurately controlling the severity of compression artifacts. However, compression ratio, the de facto standard index of compression level, has been shown not to correlate directly with artifact severity, according to Dr. Kil Joong Kim of Seoul National University Bundang Hospital in Korea.
To see if there was a better option, the researchers compared compression ratio with two alternatives, a mathematical metric (peak signal-to-noise ratio, or PSNR) and a perceptual quality metric (the high dynamic range visual difference predictor, or HDR-VDP), in 250 body CT scans obtained with five different scan protocols. The images were compressed at five levels: reversible (lossless) and irreversible at ratios of 6:1, 8:1, 10:1, and 15:1.
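Of the two alternatives, PSNR is the simpler one: it is a pixel-wise measure derived from the mean squared error between the original and compressed images. The sketch below, with toy pixel lists standing in for real CT image data, shows the standard definition; it is illustrative only and not the study's implementation.

```python
import math

def psnr(original, compressed, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-size pixel sequences.

    Higher PSNR means the compressed image is mathematically closer to
    the original, i.e. less distortion.
    """
    if len(original) != len(compressed):
        raise ValueError("images must have the same number of pixels")
    # Mean squared error over all pixels.
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
    if mse == 0:
        # Identical images, as with reversible (lossless) compression.
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)

# Toy 8-bit "images" (hypothetical values, not CT data):
orig = [100, 120, 130, 140]
mild = [101, 119, 131, 139]    # small compression artifact
harsh = [90, 135, 120, 155]    # larger compression artifact
print(psnr(orig, mild) > psnr(orig, harsh))  # True: less distortion, higher PSNR
```

HDR-VDP, by contrast, models human visual perception and has no comparably compact formula, which is part of the study's point: a perceptual metric may track visible artifacts better than a purely mathematical one.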
After viewing alternating displays of the original and compressed images, five radiologists independently judged each pair as distinguishable or indistinguishable. The study team then performed receiver operating characteristic (ROC) analysis to compare how well each of the three indices predicted those judgments.
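The paper does not spell out the ROC methodology, but the underlying idea can be sketched: for each candidate index, compute its value for every image pair, treat the radiologists' "distinguishable" calls as positives, and measure the area under the ROC curve (AUC), which equals the probability that a randomly chosen positive pair receives a higher index value than a randomly chosen negative pair. A minimal sketch, with hypothetical index values:

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    scores_pos: index values for pairs judged distinguishable (positives).
    scores_neg: index values for pairs judged indistinguishable (negatives).
    Returns the probability that a positive outscores a negative
    (ties count half). 1.0 is a perfect index, 0.5 is chance level.
    """
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical compression-ratio values for six image pairs:
distinguishable = [15.0, 10.0, 8.0]
indistinguishable = [6.0, 8.0, 6.0]
print(roc_auc(distinguishable, indistinguishable))  # 0.944...
```

An index that tracked visible artifacts perfectly would separate the two groups completely (AUC of 1.0); the overlap at 8:1 above illustrates why compression ratio can score below a better-calibrated metric.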
In their scientific paper presentation in Chicago, the researchers will make the case that it's time to revisit the current standard of using compression ratio as the index of image compression level. The perceptual quality metric performed best, and compression ratio lagged behind even the mathematical metric.