Most MIPS quality measures 'topped out' based on a fraction of physicians

Most Merit-Based Incentive Payment System (MIPS) quality measures defined as "topped out" by the Centers for Medicare & Medicaid Services (CMS) -- including some that affect radiology -- have been reported by a fraction of clinicians, a study has found.

The research raises questions about CMS's "cap-and-removal" policy, according to a team led by YoonKyung Chung, PhD, of the Harvey L. Neiman Health Policy Institute (HPI) in Reston, VA. The group's work was published April 13 in Health Affairs Scholar.

"Our findings show that many MIPS [Merit-Based Incentive Payment System] quality measures were deemed topped out based on performance reported by a relatively small subset of eligible physicians," Chung said in an HPI statement. "This raises concerns about whether topped-out designations accurately reflect quality performance across the broader clinician population."

CMS designates MIPS measures as "topped out" when reporting clinicians consistently achieve high performance, resulting in potential scoring caps and eventual removal, the investigators noted. They examined the effects of this framework via a study that included 643,558 physicians across 37 specialties who reported 275 MIPS quality measures between 2017 and 2023.

They reported the following:

  • Of the total measures, 137 (49%) were designated as topped out.
  • More than half (51%) of topped-out measures were reported by fewer than 5% of physicians in relevant specialties at the time of their topped-out designation; only 11 topped-out measures were reported by a majority of physicians.
  • The median reporting rate across all topped-out measures was 7.1%.

The study also found variation across specialties in reporting rates, the availability of quality measures, and the proportion designated as topped out. For example, in more than 70% of specialties, over half of available specialty-relevant measures were topped out by 2023, leaving physicians with limited options to achieve maximum quality scores using measures relevant to their practice. And several specialties -- including diagnostic radiology and radiation oncology -- had very few specialty-relevant measures remaining that were eligible for full scoring, according to Chung's team.

"The premature removal of measures with low reporting rates may limit opportunities for quality improvement among clinicians who have not previously reported them, discourage maintenance of quality among reporting clinicians, and incur unnecessary cost to developing replacement measures," the group noted in the statement.

"When few full-score, specialty-relevant measures are available, it undermines the ability of MIPS to meaningfully compare performance and incentivize improvement in areas most relevant to patient care," said study coauthor Lauren Nicola, MD, of Triad Radiology Associates in Winston-Salem, NC. "Alternative approaches -- such as applying topped-out and cap-and-removal policies at the clinician or entity level rather than universally -- could preserve incentives for continued quality improvement while maintaining flexibility within the program."

Access the full study here.
