May 13, 2021

Fair Play for Reimbursement? CEDR Data makes it possible.

If you’ve ever thought that the throughput metrics used to incentivize reimbursement are not applicable to your particular department, you’re not alone. Recently, Drs. Arjun Venkatesh and Shashank Ravi, along with their colleagues, published Fair Play: Application of Normalized Scoring to Emergency Department Throughput Quality Measures in a National Registry in the Annals of Emergency Medicine [https://doi.org/10.1016/j.annemergmed.2020.10.021]. This is the first study to leverage the CEDR dataset, and as such it represents a major milestone in and of itself. More importantly, it shows promise that the data CEDR clients submit could be used to alter the reimbursement landscape of the future. We got a chance to interview the authors to find out more about this exciting paper.


Greg Neyman: Can you explain the problem of unfairness you were trying to remedy?

Arjun Venkatesh & Shashank Ravi: Before CEDR designed and implemented the new scoring approach, measuring ED length of stay for CMS was crude at best. Many ED groups were limited to reporting a simple “catch-all” score across the multiple EDs they staffed, which was then compared to other groups that might staff much larger or smaller EDs. The result was that ED throughput scores were often not an apples-to-apples comparison. Many things can affect throughput from one ED to the next, even beyond the mix of patients, including internal workflows, testing and treatment patterns, and staffing models, as well as external issues like boarding or access to follow-up. Interestingly, prior research has shown that annual ED visit volume is correlated with many of these factors and is a good way to compare EDs. However, few ED groups staff just one ED anymore, so it is hard to compare ED groups that staff multiple EDs of different sizes. Our goal was to make comparison more “fair” by taking into account both ED size and the fact that groups often staff more than one ED.

GN: How is it superior to raw metrics?

AV & SR: Raw measures can be misleading. Just imagine an ED group staffing a couple of 20,000-visit-per-year EDs with an average ED length of stay of 120 minutes, while another ED group averages 120 minutes at its 20K ED and 200 minutes at its 80K ED. Raw metrics would make the second group look much worse. Our normalized scoring approach ensures a more balanced comparison of emergency departments with different characteristics in a single number that CMS can use.
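
For readers curious about the mechanics, here is a minimal Python sketch of what a volume-weighted Z-score along these lines could look like. The volume strata, benchmark means and standard deviations, and the visit-volume weighting are illustrative assumptions for this example, not the exact specification used in the paper or by CEDR.

```python
# Hypothetical sketch of a volume-weighted Z-score for ED length of stay.
# The strata cutoff, peer benchmark values, and weighting scheme below are
# illustrative assumptions, not the paper's or CEDR's actual methodology.

# Illustrative peer benchmarks by annual-volume stratum: mean and standard
# deviation of average ED length of stay, in minutes (values are made up).
PEER_BENCHMARKS = {
    "low":  {"mean": 130.0, "sd": 25.0},   # e.g., around 20K visits/year
    "high": {"mean": 180.0, "sd": 35.0},   # e.g., around 80K visits/year
}

def stratum(annual_visits: int) -> str:
    """Assign an ED to a crude volume stratum (assumed 40K cutoff)."""
    return "low" if annual_visits < 40_000 else "high"

def site_z_score(los_minutes: float, annual_visits: int) -> float:
    """Z-score a site's length of stay against peers of similar volume."""
    bench = PEER_BENCHMARKS[stratum(annual_visits)]
    return (los_minutes - bench["mean"]) / bench["sd"]

def group_weighted_z(sites: list[tuple[float, int]]) -> float:
    """Combine site Z-scores into one group score, weighted by visit volume."""
    total_visits = sum(visits for _, visits in sites)
    return sum(site_z_score(los, visits) * visits
               for los, visits in sites) / total_visits

# The two groups from the example above:
group_a = [(120.0, 20_000), (120.0, 20_000)]   # two 20K-visit EDs
group_b = [(120.0, 20_000), (200.0, 80_000)]   # one 20K and one 80K ED

print(round(group_weighted_z(group_a), 2))  # both sites beat low-volume peers
print(round(group_weighted_z(group_b), 2))  # 80K site judged against busier peers
```

In this toy version, each site is compared against peers of similar annual volume before the group score is combined, so a busy site is not penalized simply for being busier; it only scores worse to the extent that it lags EDs of comparable size.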

GN: Can a medical director use your conclusions to figure out if their department(s) are being unfairly penalized by CMS metrics?

AV & SR: Tough question – especially since there probably isn’t a medical director out there who has ever felt “fairly penalized”. In reality, medical directors should continue to use their actual scores in minutes from the CEDR dashboard to benchmark performance for local QI and operational purposes, while the weighted Z-score is really for CMS reporting purposes. Medical directors should feel reassured that the scoring approach now used by CEDR accounts for differences in group size and ED volumes, so their ultimate CMS score is a better representation of the EDs they staff.

GN: How can medical directors implement this information in their metrics?

AV & SR: For day-to-day management, medical directors don’t need to incorporate this information into QI initiatives or discussions. However, if your group performs well on these metrics, highlighting that success to hospital partners may reap downstream resources and benefits.

GN: How can we anticipate this will affect payments in the future?

AV & SR: I think we can expect payments to be more equitable in the future. This work shows that hospital comparison for metrics such as throughput can be standardized to account for group size and site volume, so ED groups that staff more challenging EDs should feel that their work is better captured in the metrics. Also, by adding the score adjustment into the measure, CEDR is able to ensure that CMS considers the ED throughput measure both as an “outcome” measure and as an end-to-end electronic measure, both of which are given favorable scoring. At the end of the day, the CMS MIPS program is a zero-sum game, and making metrics that are well positioned to stay in the MIPS program and receive bonus points will serve the specialty well in the long run.

GN: How did you find accessing and working with the dataset?

AV & SR: Since this was the first peer-reviewed manuscript published using CEDR data, there was definitely a learning curve in understanding the data provided. We also had to understand the limitations of the data, especially around outliers. We were lucky to have a great data analyst on our team who helped us turn the dataset provided into a usable format. Ultimately, the ability to aggregate data from EHRs with tools like CEDR holds tremendous promise for making healthcare delivery more data-driven, but we still have a lot of work to do in making those data channels flow, standardizing data for analysis, and, most importantly, making the results interpretable to clinicians and leaders.