Deep learning accelerates segmentation for 3D printing

Wednesday, November 28 | 3:20 p.m.-3:30 p.m. | SSM13-03 | Room E353C
A group from South Korea has developed a deep-learning algorithm that can automate image segmentation for the 3D printing of kidney models, which may ultimately speed up their production.

The researchers, from the University of Ulsan, took advantage of the rapid processing speed of a deep-learning convolutional neural network (CNN) to make the production of 3D-printed kidney models more efficient.

In recent years, studies have demonstrated the various advantages of using patient-specific 3D-printed models to simulate complex procedures. One limitation of 3D printing, however, is that converting medical images into 3D models requires a considerable amount of image segmentation -- a tedious and labor-intensive process, noted presenter Dr. Taehun Kim and colleagues.

Seeking to minimize segmentation time, they used a deep-learning CNN to automate segmentation and thereby speed up the production of 3D-printed kidney models for 36 patients with renal cell carcinoma. They used 80% of the imaging data to train the algorithm and the remaining 20% to test it.
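The presentation summary does not detail the implementation, but the patient-level 80/20 split it describes is straightforward to reproduce. The sketch below is a hypothetical Python illustration of such a split; the case identifiers, random seed, and use of scikit-learn are assumptions, not details from the study.

```python
from sklearn.model_selection import train_test_split

# Hypothetical patient/case identifiers -- the study used imaging data
# from 36 renal cell carcinoma patients, split 80%/20%.
case_ids = [f"case_{i:02d}" for i in range(1, 37)]

# Split at the patient level so no patient appears in both sets.
train_ids, test_ids = train_test_split(
    case_ids,
    test_size=0.2,      # 20% held out for testing
    random_state=42,    # assumed seed, for reproducibility only
)

print(f"{len(train_ids)} training cases, {len(test_ids)} test cases")
# -> 28 training cases, 8 test cases for a 36-patient cohort
```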

After testing the algorithm, the researchers found that segmentation with the CNN was about as accurate as segmentation with the conventional method. Automated segmentation was highly accurate for the kidney parenchyma, or structural tissue, but slightly less so for the blood vessels inside the kidney.
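The summary does not name the accuracy metric, but automated segmentations are commonly compared against a reference segmentation with an overlap measure such as the Dice similarity coefficient, computed separately for each structure (parenchyma, vessels). The following is a generic sketch of that kind of per-structure comparison, not the authors' evaluation code; the label map and toy data are invented for illustration.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary masks (True = structure)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def per_structure_dice(pred_labels, truth_labels, labels):
    """Dice score for each labeled structure in a multi-class segmentation mask."""
    return {name: dice_coefficient(pred_labels == value, truth_labels == value)
            for name, value in labels.items()}

# Hypothetical label map -- the study's actual labeling scheme is not described.
labels = {"parenchyma": 1, "vessels": 2}

# Toy 3D masks standing in for a CNN prediction and a reference segmentation.
rng = np.random.default_rng(0)
truth = rng.integers(0, 3, size=(32, 32, 32))
pred = truth.copy()
pred[rng.random(pred.shape) < 0.05] = 0  # simulate small prediction errors

print(per_structure_dice(pred, truth, labels))
```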

The automated segmentation application dramatically cut image processing time for making the 3D-printed models, but more data are needed to improve segmentation accuracy for certain areas, including the kidney vasculature and lesions, according to the researchers.
