Good Metrics Gone Wrong
By Mark Jaben, MD
We all agree that if we can get our door-to-doc times (D2D) down to a certain level, patient satisfaction scores will improve. How, then, do you reconcile this belief with these actual figures from a multihospital group?
[Table: Door to Doc (min) vs. Top Box Score]
The problem is that correlation does not mean causation. D2D may correlate with patient satisfaction, but is it the cause of patients being satisfied?
We all know that retrospective studies can only be hypothesis-generating. Both D2D times and patient satisfaction scores are gathered retrospectively and reported well after the time of occurrence, making them incapable of illuminating cause and effect. In addition, both are affected by too many variables to allow specific causes to be isolated. When we chase a correlation that is not a root cause, we risk altering workflows in a way that is counterproductive to what patients value. No patient comes to the ED to be seen; they come for the treatment and advice that will help them get better. When we incentivize staff to prioritize their work by focusing on a metric, we run the risk of unintended consequences.
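The confounding argument can be made concrete with a toy simulation. In this purely illustrative sketch (all numbers invented), a hidden variable such as staffing level drives both D2D and satisfaction; the two end up strongly correlated even though neither causes the other, so pushing on D2D alone would not move satisfaction.

```python
import random

random.seed(42)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

d2d, satisfaction = [], []
for _ in range(1000):
    staffing = random.uniform(0.5, 1.5)  # hidden confounder (hypothetical)
    # Better staffing -> shorter waits AND happier patients (invented relationships)
    d2d.append(60 / staffing + random.gauss(0, 5))
    satisfaction.append(50 * staffing + random.gauss(0, 5))

# Strongly negative correlation, yet D2D never caused satisfaction here
print(round(pearson(d2d, satisfaction), 2))
```

Any intervention that shortened D2D without touching staffing would leave satisfaction unchanged in this toy world, which is exactly the trap the retrospective numbers set.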
Despite this, we use both to draw conclusions about the problems that prevent satisfactory results. D2D is at best a process metric that reflects back-end issues in the ED. That is because flow is a back-end issue. Not convinced? Pour water into a funnel. Pretty soon you are pouring more than the outflow neck can handle. The funnel fills up and overflows. Now turn the funnel upside down. I challenge you to make the neck overflow now.
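The funnel analogy is ordinary queueing arithmetic. Here is a minimal fluid sketch, with invented rates: when arrivals exceed the back end's service rate, the waiting line grows all shift regardless of how the front end is arranged; when the back end keeps up, the line stays empty.

```python
def simulate(arrival_rate, service_rate, minutes):
    """Deterministic fluid approximation: queue length after `minutes`.
    Rates are patients per minute; all values are hypothetical."""
    queue = 0.0
    for _ in range(minutes):
        queue += arrival_rate              # patients arriving this minute
        queue -= min(queue, service_rate)  # back end can only move so many
    return queue

# Overloaded: 6 arrive, back end handles 5 -> queue grows ~1/min all shift
print(simulate(arrival_rate=6, service_rate=5, minutes=480))  # -> 480.0
# Under capacity: back end keeps up -> queue stays empty
print(simulate(arrival_rate=4, service_rate=5, minutes=480))  # -> 0.0
```

No front-end rearrangement changes `service_rate`; only back-end capacity does, which is why D2D reflects back-end issues.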
D2D is a sensitive indicator of capacity problems and department overload. When used in real time, it can alert staff, who can then problem-solve to identify the cause at that moment and institute responses to get caught up. Metrics like patient satisfaction can help us understand when there is a problem, but when collected in small sample sizes and reported well after the fact, they lose the ability to help. We cannot know the particular circumstances that were at play at the time, making it impossible to do the in-depth problem solving required to determine cause and effect.
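The real-time use described above can be as simple as a rolling alert. This is a hypothetical sketch, not any hospital's actual system; the 45-minute threshold and 5-patient window are invented parameters.

```python
from collections import deque
from statistics import mean

THRESHOLD_MIN = 45  # hypothetical alert threshold
WINDOW = 5          # hypothetical number of recent patients to average

recent = deque(maxlen=WINDOW)

def record_d2d(minutes):
    """Record one patient's door-to-doc time; return True if the rolling
    average has crossed the threshold and staff should huddle now."""
    recent.append(minutes)
    return len(recent) == WINDOW and mean(recent) > THRESHOLD_MIN

for t in [30, 35, 50, 55, 60, 65]:
    if record_d2d(t):
        print(f"ALERT: rolling D2D average {mean(recent):.0f} min")
```

The point is timing: the alert fires while the circumstances that caused the backup are still observable, which is precisely what a quarterly report cannot offer.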
We make a mistake when we use a process measure, like D2D, as a goal; it diverts our attention from searching for the real causes. We make a mistake when we use a broad-based performance measure, like patient satisfaction, to evaluate a process. We truly court disaster when we use either as the basis for evaluating an individual's performance. There are just too many variables at play to isolate cause, and we end up mistaking correlation for causation. The resulting loss of credibility and trust often derails even the best-intentioned improvement efforts.
We must distinguish marketing/reporting metrics from improvement metrics. We want to put our best foot forward, but the metrics that look good in a report are not necessarily the 'right' metrics to focus on for improvement.