Structure may limit utility of decision support

Sunday, November 29 | 12:30 p.m.-1:00 p.m. | IN201-SD-SUA2 | Lakeside Learning Center, Station 2
In this poster presentation, a Henry Ford Hospital research team will discuss how the requirement that referring physicians enter structured information into clinical decision-support software can limit the software's performance.

The research project was a follow-up to a study the group published last year, according to co-author Dr. Andrew Moriarity. The institution had created electronic decision-support software as part of the Medicare Imaging Demonstration project, which focused on outpatient imaging. When the researchers also assessed its performance for ordering inpatient imaging, they found that the clinical decision-support (CDS) software failed to "score" almost three-fourths of all requests, he said.

"Basically, clinicians were putting in the data, but not in a way that the software could understand," Moriarity told AuntMinnie.com. "In the first paper, we found that the majority of those requests that were scored tended to be appropriate, but there was room for improvement."

In the study being presented at RSNA 2015 by Dr. Chad Klochko, the group set out to determine the composition of those unscored requests: would they largely follow guideline recommendations, or would they include a substantial amount of inappropriate imaging? The researchers gathered representative imaging studies ordered before and after implementation of the decision-support software to see whether access to expert guidelines made any difference in the appropriateness of the imaging requests, Moriarity said.

During the baseline period, all requests that went through the CDS software generated an appropriateness score in the background that was not shown to the provider. During the intervention period, a screen was added to show the score generated by the software based on the American College of Radiology's Appropriateness Criteria. Providers could then modify the request, cancel it, or proceed without making any changes, Moriarity said.

The team found that 74% of inpatient advanced imaging requests at baseline and 71% with CDS did not contain sufficient structured clinical data to automatically generate an appropriateness criteria score at the point of care. However, after a human observer reviewed the information the ordering physicians had entered into the software's free-text boxes, those percentages dropped to 14% and 3%, respectively, Moriarity said.

"Basically, providers were giving the software (and by extension us, as radiologists) the information needed to determine the appropriateness of studies in the vast majority of cases, but they were doing so in a way that the computer could not understand," he told AuntMinnie.com. "The next step will be to figure out how to solve this problem from both ends -- helping providers use a more structured data-entry method, and also improving free-text processing by the software to extract meaningful information from the free text."

As expected, both the amount of information provided by referring providers and the overall appropriateness of studies showed statistically significant improvement when CDS was used, Moriarity noted.
