Practical Considerations of AI: Part 2 -- ROI variables

As we discussed in part 1 of this series on the practical considerations of artificial intelligence (AI), getting paid for using radiology AI is off the table for now. So we need to look instead at cost reductions, time savings, and other ways to demonstrate the always-crucial return on investment (ROI).

These variables include the following:

  • The potential to significantly improve outcomes and patient care, including reducing readmissions
  • The ability to provide a faster diagnosis and make quicker decisions at the point of care
  • The potential to shorten hospital length of stay (LOS)
  • The potential to provide high accuracy rates
  • Reducing physician burnout for both radiologists and primary care physicians
  • The ability to tie in with monitoring apps and, through early detection, head off a problem before it becomes one
  • The ability to identify rare diseases without additional testing
  • The ability to identify ancillary findings that might otherwise be overlooked

Let's examine just a few of these.

Any time you can get a second opinion to improve outcomes and patient care, you take it. This is especially important if it doesn't significantly increase costs. AI can be a great source of a second opinion for as little as $1 per study. That's great for studies that provide a higher reimbursement, such as ultrasound, CT, and MRI. Chest films, on the other hand, often cost more to read than the $9.30 reimbursement that Medicare pays. Thankfully, private insurers pay a bit more, although not that much more.

Will the added value of a second opinion be worth -- or at least offset -- the cost of the software? That remains to be seen.
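A quick way to frame that question is to compare the AI fee with what each study actually pays. The sketch below uses the $1-per-study fee and the $9.30 chest reimbursement cited above; the CT and MRI figures are hypothetical placeholders, not fee-schedule values.

    # What share of a study's reimbursement would a $1 AI second opinion
    # consume? Only the $1 fee and the $9.30 chest x-ray figure come from
    # the article; the CT and MRI payments are hypothetical placeholders.
    ai_fee_per_study = 1.00

    reimbursements = {
        "chest x-ray (Medicare)": 9.30,
        "CT (hypothetical)": 150.00,
        "MRI (hypothetical)": 300.00,
    }

    for exam, payment in reimbursements.items():
        share = ai_fee_per_study / payment
        print(f"{exam}: the AI fee is {share:.1%} of a ${payment:.2f} payment")

At $9.30, a $1 fee eats more than 10% of the chest film's reimbursement; against a higher-paying CT or MRI, it's closer to a rounding error.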

Reducing readmissions

Back in 2011, there were 3.3 million hospital readmissions in the U.S. To address that issue, Medicare developed the Hospital Readmissions Reduction Program (HRRP), which penalizes hospitals for readmissions within 30 days. A readmission counts regardless of where it occurs -- any hospital, not just the one where the patient was originally treated.

Michael J. Cannavo.

Medicare also uses an "all-cause" definition of readmission. This means that hospital stays within 30 days of a discharge from an initial hospitalization are considered readmissions, regardless of the reason for the readmission. At present, only six conditions -- heart attack, heart failure, coronary artery bypass graft surgery, chronic obstructive pulmonary disease, pneumonia, and elective hip or knee replacement -- are used to track hospital readmission rates.

Now, here is where the fear, uncertainty, and doubt (FUD) factor comes into play. Technically, a hospital with excess readmissions can be assessed a penalty of up to 3% of its Medicare reimbursement. If you readmit 15% of your Medicare patients -- the national average in 2018 -- losing 3% of that reimbursement can add up. According to the Kaiser Family Foundation, though, nearly 80% of hospitals in 2017 received either no penalty or less than a 1% penalty, and fewer than 2% took the full 3% hit. In fact, the average penalty stands at just over 0.75%. From a purely business standpoint, almost every facility I know would take a 0.75%, 1%, or even 3% financial hit any day to get another readmission and the money that accompanies it. So a 3% penalty is really no penalty at all.
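To see why, consider a minimal sketch of that trade-off. Every dollar figure below is hypothetical and chosen purely for scale; only the penalty rates come from the discussion above.

    # Hypothetical penalty-versus-revenue trade-off. The penalty rates are
    # from the discussion above; the revenue and per-stay figures are
    # invented for scale and do not reflect any actual hospital.
    medicare_revenue = 100_000_000    # hypothetical annual Medicare reimbursement ($)
    payment_per_readmission = 14_000  # hypothetical average payment per readmitted stay ($)

    for penalty_rate in (0.0075, 0.01, 0.03):
        penalty = medicare_revenue * penalty_rate
        breakeven_stays = penalty / payment_per_readmission
        print(f"A {penalty_rate:.2%} penalty costs ${penalty:,.0f} -- offset by "
              f"roughly {breakeven_stays:,.0f} paid readmissions")

Under these assumptions, even the full 3% penalty ($3 million) is recouped by a couple hundred paid readmissions -- exactly the business logic described above.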

It is important to note that total Medicare penalties assessed on hospitals for readmissions increased to $564 million in 2018, up just slightly from the prior year. Even though that sounds like a lot of money, these hospital fines are a drop in the bucket when contrasted with the roughly $43 billion spent on readmissions annually -- the penalties work out to little more than 1% of that total. According to data from the Center for Health Information and Analysis (CHIA), $26 billion was spent on Medicare hospital readmissions and about $17 billion on avoidable hospital trips after discharge. Interestingly, this is only up slightly from the $41.3 billion paid for readmissions in 2011, back when the HRRP started.

While AI could be a major help, keep in mind there are other things that can be done to cut down on readmissions. These include patient education, coordinated care, and medication reconciliation, as well as addressing social determinants of health. The AI software developed at the University of Pittsburgh Medical Center -- as discussed in part 1 of this series -- addresses many of these areas as well. Unfortunately, that software's ROI can't be determined yet, as the product hasn't been commercialized.

Length of stay

Length of stay is interesting to assess, but there are simply too many other variables to determine whether AI can affect it. It can be a little touchy to claim that LOS can be reduced based on the increased accuracy of AI algorithms. In the studies done to date, AI has usually been found to perform comparably -- but not superiorly -- to radiologists. In most cases, the AI reports and the radiologists' reports agree. But what happens when they don't? That varies by vendor.

Reporting of AI findings also varies. Some companies send the AI results automatically to the PACS, and radiologists can then choose to use them in their report or ignore them. The AI results can also be sent to the referring doctor without going to the radiologist first -- if the software is set up that way.

Other firms put the choice in the radiologist's hands: they can send the AI report to the PACS or withhold it, but they can't change their minds after the fact.

Still other vendors give the radiologist the final say on whether the AI report can be changed before it's included in the final report. The radiologist's report can be amended, but the original AI report stays with the final report. Medicolegal issues could result, however, if the radiologist's report says one thing and the AI-generated report says another, or if the AI report contains findings that are missing from the final signed report.

Regardless of whether the AI report is left unchanged, modified, or omitted from the report, the radiologist still needs to review the AI results before deciding whether they belong in the final report. The study might also need to be reviewed again if the AI detected something the radiologist may have missed, or failed to flag something the radiologist thought was important. All of this adds extra time per exam and affects workflow. These implications are crucial to understand in an industry where time is money.
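Abstracted from those vendor descriptions, the three routing approaches might be sketched as follows. The policy names and function below are invented for illustration and do not describe any vendor's actual product.

    # Hypothetical sketch of the three AI-report routing policies described
    # above; the names and logic are invented and vendor-neutral.
    from enum import Enum, auto

    class RoutingPolicy(Enum):
        AUTO_TO_PACS = auto()      # results pushed to PACS automatically
        SEND_OR_WITHHOLD = auto()  # radiologist decides once, irreversibly
        RADIOLOGIST_FINAL = auto() # radiologist may amend; AI report stays attached

    def route_ai_report(policy: RoutingPolicy, ai_report: str,
                        final_report: str, send: bool = True) -> str:
        if policy is RoutingPolicy.AUTO_TO_PACS:
            # The findings arrive whether or not the radiologist uses them.
            return final_report + "\n[AI findings auto-routed]\n" + ai_report
        if policy is RoutingPolicy.SEND_OR_WITHHOLD:
            # A one-time, irreversible decision to include the AI report.
            return final_report + ("\n" + ai_report if send else "")
        # RADIOLOGIST_FINAL: the amended report governs, but the original
        # AI report remains attached -- the medicolegal wrinkle noted above.
        return final_report + "\n[Original AI report retained]\n" + ai_report

Whatever the policy, the review step -- and the time it consumes -- stays with the radiologist.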

Regulatory matters

To date, fewer than one in four commercially offered algorithms has received U.S. Food and Drug Administration (FDA) 510(k) clearance. Notably, the FDA has supplemented the traditional 510(k) process with its de novo pathway. This pathway supports the clearance of "novel, low- to moderate-risk devices for which general controls, or general and special controls, provide a reasonable assurance of safety and effectiveness, but for which there is no existing predicate to use in determination of substantial equivalence." This has not been without controversy, however, especially regarding sampling limitations, protocol decisions, and specific clinical limitations that need to be addressed.

Using the de novo pathway, the FDA in April 2018 approved a diabetic retinopathy AI algorithm whose supporting data came from just 900 studies -- far below what many believe is a valid sample size. Many AI algorithms are trained on datasets in the thousands, tens of thousands, and -- in some cases -- hundreds of thousands of studies to increase both accuracy and specificity. Purists argue that the larger the dataset, the higher the AI accuracy. Others say increasing accuracy by one one-hundredth of a percentage point is overkill, especially when most AI algorithms rarely exceed 97% or 98% agreement with the radiologist's interpretation.
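One way to ground that debate is to look at how sample size affects the statistical precision of an accuracy estimate. The sketch below uses a simple normal-approximation confidence interval and the 97% figure cited above; it illustrates the statistics, not any vendor's validation method.

    # How sample size affects the precision of an accuracy estimate,
    # using a normal-approximation 95% confidence interval. The 97%
    # figure comes from the range cited above; the sample sizes mirror
    # the 900-study approval and the larger datasets mentioned.
    import math

    def accuracy_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
        margin = z * math.sqrt(p_hat * (1.0 - p_hat) / n)
        return (p_hat - margin, p_hat + margin)

    for n in (900, 10_000, 100_000):
        low, high = accuracy_ci(0.97, n)
        print(f"n = {n:>7,}: 95% CI = ({low:.2%}, {high:.2%})")

At 900 cases, the interval spans about two percentage points; at 100,000 it narrows to roughly a fifth of a point -- diminishing returns that echo both sides of the argument.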

The FDA approved its first radiology AI algorithm in January 2017 and is now clearing them at a rate of about one per month. The FDA's planned software precertification program may also help increase the number of approved AI algorithms.

In the meantime, however, that still leaves a huge chunk of the marketplace without clearance. Interestingly, many AI vendors have indicated they do not plan to submit a 510(k) application; instead, they will position their software as an adjunct to a radiologist's interpretation. This raises a host of questions, from how to show an ROI (unless the software is bundled for free with a PACS software upgrade) to medicolegal concerns.

More importantly, if a third-party AI algorithm is to be added to a PACS, it will need to go through the PACS vendor's validation process to ensure it doesn't negatively affect the existing PACS software. Depending on the company's road map, this process can easily take two years or longer if it's not an internal product. By that time, five new algorithms will have been offered in the marketplace for that specific anatomical area.

In part 3, I'll discuss how to choose the best AI for you, how to integrate AI into PACS, and how to determine where and when to use AI.

Michael J. Cannavo is known industry-wide as the PACSman. After several decades as an independent PACS consultant, he worked as both a strategic accounts manager and solutions architect with two major PACS vendors. He has now made it back safely from the dark side and is sharing his observations.

His healthcare consulting services for end users include PACS optimization services, system upgrade and proposal reviews, contract reviews, and other areas. The PACSman is also working with imaging and IT vendors developing market-focused messaging as well as sales training programs. He can be reached at [email protected] or by phone at 407-359-0191.

The comments and observations expressed are those of the author and do not necessarily reflect the opinions of AuntMinnie.com.
