3DHeals: Emerging technologies complement 3D printing


How might emerging technologies work together as they expand their role in healthcare imaging? 3D printing experts shared their perspectives on the integration of 3D printing with augmented reality (AR) and artificial intelligence (AI) in several presentations at the 3DHeals 2018 conference last week in San Francisco.

Transformative and emerging technologies such as AR and AI might be able to replace valuable human skills, especially when technology from outside an industry collides with technology within it, said keynote speaker Jeff Sorenson, CEO of advanced visualization firm TeraRecon. But these technologies can also elevate the contributions of clinicians, reduce medical errors, promote proactive care, and strengthen connections with patients.

"In tomorrow's world, there's going to be an entire new group of winners and losers, including in the field of radiology," he said. "There's a new group of technologies out there, and they're going to change the way we work and what we do."

Extending 3D printing's reach

Interdisciplinary communication for presurgical planning often entails a radiologist translating information from 2D medical images and sharing it with surgeons, who then have to build a mental model of what they are told in person or read in a report, Dr. Jesse Courtier of the University of California, San Francisco (UCSF) told conference attendees.

Dr. Jesse Courtier from UCSF.

"There is a disconnect between the 2D world that we experience in radiology and the 3D, real world, where all that information is hidden underneath the surface," he said. "We can help bridge that gap using 3D imaging and using a 3D-printed model, where we can show, closer to native information, what a surgeon actually sees."

However, he also acknowledged that 3D-printed models may not be ideal for preprocedural multidisciplinary conferences involving clinicians situated in different parts of the country or world. Each participant would need his or her own model and printer, and the cost of creating them would be high.

"What if we could take that same [3D-printed] model and put it into a holographic model that could be downloaded onto someone's phone using an AR kit, a HoloLens, or some other version, where you could peel through the model like water, make it any color you want, and iterate lots of different versions without having to print it?" Courtier said.

Recently, Courtier and colleagues developed an iPhone app that can project medical images in the stereolithography (STL) file format as holograms onto a toy cube (Merge Cube, Merge Labs of San Antonio). Clinicians who install the app can view, from a remote location, what a 3D-printed model would look like projected onto the low-cost cube and note any desired adjustments before the model is printed.
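The presentation did not go into the app's internals, but as a rough sketch of one step in such a workflow, the snippet below converts an STL surface mesh into a compact format that mobile AR frameworks can load. It assumes the open-source trimesh Python library; the file names are placeholders, not part of the UCSF app.

```python
# Hypothetical sketch: prepare a segmented STL surface for mobile AR viewing.
# File names are placeholders; this is not the UCSF app's actual code.
import trimesh

# Load the STL exported from the 3D printing workflow.
mesh = trimesh.load("heart_model.stl")

# Medical STLs are typically in millimeters; most AR scene graphs expect meters.
mesh.apply_scale(0.001)

# Export as GLB (binary glTF), a compact format widely supported by AR viewers;
# in practice the mesh would also be decimated so it streams quickly to a phone.
mesh.export("heart_model.glb")
```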

These "holographic prototypes" make sharing 3D models much more accessible, and they may reduce the need for reprints or even replace costly or complex 3D-printed models, Courtier said.

"AR gives me a tool as a radiologist to translate what's in my head in a way that is a lot more relatable to surgeons," he said. "Something that would take a lot of complexity and time to print is now something you can make very quickly."

'Second image revolution'

Artificial intelligence is also playing a growing role in 3D printing, particularly in processing images before they are printed as models, although there is a potential role for machine learning in every area of the field, according to Dr. Sanjay Prabhu, director of the Boston Children's Hospital Simulator Program (SIMPeds 3D Print).

The SIMPeds 3D Print lab currently uses in-house software to automatically fuse CT and MRI scans of different areas of the body. One such tool can recognize where the neck stops and the abdomen starts in a series of images and combine the separate images accordingly for the 3D model.
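Prabhu did not detail the lab's specific tools, but automated multimodality fusion of this kind is commonly built on intensity-based registration. As a hedged illustration, the sketch below uses the open-source SimpleITK library with a mutual information metric, which tolerates the very different intensity characteristics of CT and MR; the file names and parameter values are placeholders.

```python
# Illustrative CT/MR fusion with SimpleITK; not the SIMPeds lab's pipeline.
import SimpleITK as sitk

ct = sitk.ReadImage("neck_ct.nii.gz", sitk.sitkFloat32)   # placeholder files
mr = sitk.ReadImage("neck_mr.nii.gz", sitk.sitkFloat32)

# Rough alignment from image geometry, then intensity-based refinement.
initial = sitk.CenteredTransformInitializer(
    ct, mr, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=2.0, minStep=1e-4, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(ct, mr)

# Resample the MR volume into CT space so both can feed a single 3D model.
fused_mr = sitk.Resample(mr, ct, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(fused_mr, "mr_in_ct_space.nii.gz")
```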

In addition, there are existing algorithms built into image editing software that can automatically segment medical images, he said. The simultaneous truth and performance level estimation (STAPLE) algorithm, for example, combines multiple independent segmentations previously performed by researchers, weighing each one's reliability to produce a consensus that serves as the best estimate of the true segmentation.
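For readers unfamiliar with STAPLE, the idea can be sketched in a few lines of NumPy: treat the true segmentation as hidden, then alternate between estimating it and estimating each rater's sensitivity and specificity. This toy implementation is for illustration only and is not the tooling Prabhu described; production pipelines rely on library implementations such as ITK's STAPLE filter.

```python
# Toy NumPy sketch of the STAPLE idea (expectation-maximization over
# multiple binary segmentations); for illustration only.
import numpy as np

def staple(segmentations, n_iter=50):
    """segmentations: (n_raters, n_voxels) array of 0/1 labels.
    Returns (consensus foreground probabilities, sensitivities, specificities)."""
    d = np.asarray(segmentations, dtype=float)
    n_raters, _ = d.shape
    prior = d.mean()                          # global foreground prior
    p = np.full(n_raters, 0.9)                # initial per-rater sensitivity
    q = np.full(n_raters, 0.9)                # initial per-rater specificity

    for _ in range(n_iter):
        # E-step: probability that each voxel is truly foreground.
        log_a = np.log(prior) + (d * np.log(p[:, None])
                                 + (1 - d) * np.log(1 - p[:, None])).sum(axis=0)
        log_b = np.log(1 - prior) + ((1 - d) * np.log(q[:, None])
                                     + d * np.log(1 - q[:, None])).sum(axis=0)
        w = 1.0 / (1.0 + np.exp(log_b - log_a))

        # M-step: re-estimate each rater's sensitivity and specificity.
        p = np.clip((d * w).sum(axis=1) / w.sum(), 1e-6, 1 - 1e-6)
        q = np.clip(((1 - d) * (1 - w)).sum(axis=1) / (1 - w).sum(), 1e-6, 1 - 1e-6)
    return w, p, q

# Example: three raters' segmentations of a six-voxel image.
raters = np.array([[1, 1, 0, 0, 1, 0],
                   [1, 1, 1, 0, 1, 0],
                   [1, 0, 0, 0, 1, 1]])
consensus, sens, spec = staple(raters)
final_mask = consensus > 0.5              # threshold the consensus probabilities
```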

There are similar algorithms for automated organ detection and segmentation, as well as for image registration, which aligns anatomical landmarks so that clinicians can overlay images from different modalities when creating a 3D model. AI may also be able to determine and suggest which 3D printer would be the best option for the type of model needed.
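As one simplified, concrete example of landmark-based registration, the sketch below uses the Kabsch algorithm to estimate the rigid transform that maps landmark coordinates from one modality into another's frame. The coordinates are invented placeholders, and real pipelines would typically use registration libraries rather than hand-rolled code.

```python
# Simplified rigid landmark registration (Kabsch algorithm).
# Landmark coordinates are invented placeholders, not patient data.
import numpy as np

def rigid_landmark_transform(moving_pts, fixed_pts):
    """Return (R, t) such that fixed ≈ moving @ R.T + t."""
    mov_c, fix_c = moving_pts.mean(axis=0), fixed_pts.mean(axis=0)
    h = (moving_pts - mov_c).T @ (fixed_pts - fix_c)   # cross-covariance
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:                           # avoid reflections
        vt[-1, :] *= -1
        r = vt.T @ u.T
    t = fix_c - r @ mov_c
    return r, t

# Corresponding anatomical landmarks (mm) picked on an MR and a CT scan.
mr_landmarks = np.array([[10.0, 22.0, 5.0], [40.0, 18.0, 7.0],
                         [25.0, 60.0, 9.0], [30.0, 35.0, 20.0]])
ct_landmarks = np.array([[12.5, 20.1, 6.2], [42.8, 16.0, 8.1],
                         [27.0, 58.3, 10.4], [32.2, 33.5, 21.0]])

R, t = rigid_landmark_transform(mr_landmarks, ct_landmarks)
mr_in_ct = mr_landmarks @ R.T + t        # MR points overlaid in the CT frame
```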

Still, there are challenges to making these tools available everywhere and not just in-house, such as gaining access to large annotated training datasets, improving collaboration between manufacturers and clinicians, and confirming that an algorithm's failure rate is extremely low.

To that end, Prabhu advised attendees to "disrupt yourself if need be, quantify success beyond the 'wow, it is cool' factor, and quantify clinician feedback."

"We are on the cusp of the second image revolution," he said. "I think 3D printing is in the right place right now for innovation with AI, which has an important role to play in the image visualization space, including and beyond 3D printing."
