Wednesday, February 08, 2012

When is a comparison not a comparison?

I guess we shouldn't be surprised when a website called "Hospital Compare" chooses to adopt a methodology that presents comparisons, but leaves us with less-than-useful information.  According to Modern Healthcare:

The CMS' Hospital Compare website has been updated with facility-specific data on central line-associated bloodstream infections, a move the agency says will “hold hospitals accountable for bringing down these rates, saving thousands of lives and millions of dollars.”

Here is the approach used by CMS:

The Central Line Associated Blood Stream Infections (CLABSI) Score is reported using a Standardized Infection Ratio (SIR). This calculation compares the number of central line infections in a hospital’s intensive care unit to a national benchmark based on data reported to NHSN from 2006 – 2008. The result is adjusted based on certain factors such as the type and size of a hospital or ICU.
  • A score of less than 1 means that the hospital had fewer CLABSIs than hospitals of similar type and size.
  • A score of 1 means the hospital's CLABSI score was no different than hospitals of similar type and size.
  • A score of more than 1 means the hospital had more CLABSIs than hospitals of similar type and size.
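
For readers who want to see the arithmetic behind that description, here is a minimal sketch of an SIR calculation. It is my illustration, not CMS's actual method or code: the unit categories, baseline rates, and counts are invented, and I am assuming the baseline amounts to stratum-specific national rates applied to a hospital's central-line days.

```python
# Minimal sketch of a Standardized Infection Ratio (SIR) calculation.
# All categories, rates, and counts are hypothetical, for illustration only.

# Assumed national baseline CLABSI rates per 1,000 central-line days,
# stratified the way CMS describes (by unit type and size).
baseline_rates = {
    "medical_icu_large": 2.0,
    "surgical_icu_small": 1.5,
}

# One hospital's observed experience in the same strata.
hospital_units = {
    "medical_icu_large": {"line_days": 4000, "infections": 6},
    "surgical_icu_small": {"line_days": 1000, "infections": 3},
}

observed = sum(u["infections"] for u in hospital_units.values())
predicted = sum(
    baseline_rates[stratum] * u["line_days"] / 1000
    for stratum, u in hospital_units.items()
)

sir = observed / predicted
print(f"Observed: {observed}, predicted: {predicted:.1f}, SIR: {sir:.2f}")
# SIR < 1: fewer infections than the baseline predicts; > 1: more.
```

Note that the published number is a ratio to a modeled expectation, not a rate a patient can interpret on its own.
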
What's wrong?  The hospital data presented "reflect experiences of patients treated between October 2008 and June 2010," an untimeliness that would render them useless if we were interested in comparisons.  But, what is this business about benchmarking to a period of time four to six years ago, an ancient era in the world of hospital safety and quality?  Further, why are the data also adjusted based on the type and size of a hospital?  These three flaws act together to present an untrue impression of precision and relevance.

If there were ever a metric that did not need a benchmark or an adjustment, it is the rate of central line infections.  The target for this metric should be zero.  As noted here by Jason Hickok, a registered nurse who helps direct patient safety and infection prevention at the Hospital Corporation of America: “We are making sure that people understand that zero is our goal.  It is a permanent culture change that zero is our target.”  The numbers reported on the CMS site should be the raw rate, the number of cases per thousand patient days.

But, why are we comparing hospitals to one another?  The more valuable presentation would be one that showed the trend for each hospital, comparing its performance against its own previous performance.  We don't have to wait several years for data to arrive.  Each hospital knows its infection rate and calculates it in real time, every month.  If these numbers were presented, consumers could see if the hospital was getting better and sustaining its gains.  That would tell you more about a hospital than a flawed comparison presenting a useless metric based on a faulty methodology.
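
To make the alternative concrete, here is a small, hypothetical sketch of what is proposed above: the raw rate (cases per thousand patient days, as stated in the previous paragraph) tracked month by month for a single hospital, with no benchmark and no adjustment. The months and counts are invented for illustration.

```python
# Hypothetical monthly data for one hospital: (month, infections, patient days).
monthly = [
    ("2011-10", 4, 9200),
    ("2011-11", 3, 8900),
    ("2011-12", 2, 9100),
    ("2012-01", 1, 9400),
]

# Raw rate per 1,000 patient days, month by month: no benchmark, no adjustment.
for month, infections, patient_days in monthly:
    rate = infections / patient_days * 1000
    print(f"{month}: {rate:.2f} CLABSIs per 1,000 patient days")
```

A consumer looking at such a series could see whether the trend is heading toward zero and staying there.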

10 comments:

  1. Marsha Lovejoy; Manager, Patient Advocacy - Cook Medical (February 09, 2012 11:44 AM)

    Our true competition should be disease, not other hospitals. In this case, we’re dealing with a CMS-defined ‘never event’ that takes 53 American lives every day (1). We must focus our time on prevention methods that can save lives while reducing healthcare costs.

    Zero is the goal. Let’s not make it more complicated.

    (1) Centers for Disease Control and Prevention (CDC). Vital signs: central-line associated blood stream infections—United States, 2001, 2008, and 2009. MMWR Morb Mortal Wkly Rep. 2011;60(8):243-248.

  2. Never mind the fact that the data represent the performance of the intensive care unit. If I needed to head on down to the ICU, I wouldn't be spending 10 minutes beforehand to check which one in my area had the lowest infection rates - I'd likely go to the one that the ambulance took me to.

  3. Thanks for the call-out and for linking to the National Journal article. Enjoyed your post. Agree with @Marsha that reducing infections is the real focus/competition here. This is giving me an idea for a post on the HCA blog about our AIM for Zero program. Thanks, Paul!

  4. Thank you for this post, but I respectfully disagree with the analysis. Zero is a commendable goal, but not always reachable. Even in the landmark NEJM paper the mean reached was above zero (yes, the median was zero, but this is not a fully representative metric). Severity of illness matters, the type of patient mix matters, and no, we cannot always control whether or not a particular patient develops a HAI. I know this is heresy, but it is also reality. Disease is an interaction of the patient, the condition and the care that is delivered. The only thing that can be controlled is the care delivery, and this should be the goal. This race to "eliminate" some of the complications that are not necessarily under our control is pernicious and, yes, dangerous. Everyone wants to do better, but we need to inject some reason into the discussion.

  5. I am so glad you wrote.

    I have never seen evidence that severity of illness matters with regard to central line infections. Ditto for patient mix.

    Why is a zero goal pernicious or dangerous? Do you have evidence that places that have set a zero goal have inadvertently caused more harm? You really need to defend that comment. What goal would you set that is intellectually defensible?

    Are you also suggesting that my critique of the data on the site is flawed? That, after all, was the main point.

    Others may want to comment.

  6. Paul, thank you for your response.
    There is really very little doubt that patient mix matters when it comes to HAI, CLABSI included. If you look at the NHSN data broken down by type of unit, you see mean incidence ranging from the mid-1s per 1,000 days to the mid-5s per 1,000 days. This means that the type of patient that is common in a particular unit matters.
    I agree with you that we should have real time data, and a lag of years is unacceptable.
    Now, the intellectually defensible goal is a much tougher question, of course. I vacillate between thinking that we should be measuring adherence to process measures (the CL checklist, for example), given that that is the only thing that can be controlled, and realizing that we are not even sure how well this correlates with the desirable outcome. So, for the moment, while the science of quality measurement matures, perhaps the process measures are the best we can do.
    The pernicious nature of focusing on zero is admittedly theoretical at this point, though I have not yet had the chance to explore the literature on it. One way to get rid of a problem is to "disappear" it. And once it disappears or becomes something else, it becomes that much more difficult to study and eradicate.
    My other objection is that if you look at the Landrigan paper in NEJM, all of the chasing zero has not amounted to improved safety in our hospitals. So, perhaps there are complementary approaches that need to be explored.

  7. Dr. Zilberberg,

    I am familiar with your writings and have always appreciated your voice of reason in discussing medical issues. However, in this case I am interested in your statement:

    " This race to "eliminate" some of the complications that are not necessarily under our control is pernicious and, yes, dangerous."

    I suspect you are worried that this goal of zero will become politically solidified in the lay press and anything less will be regarded as malpractice or some conspiracy on the part of providers, no? I see your point, but I am afraid that setting incremental goals only results in incremental improvements. If we say we want to reduce HAIs by 50% and we achieve 50%, then who knows how much further we might have gone if we had been more ambitious? And how do we then encourage staff to go further - the response would be - "I met your goal, and now you want MORE?"
    I think you must share my own surprise at just how huge a reduction has been achieved in such decades-long problems as CLABSI and VAP by just doing, conscientiously and consistently, the simple, low-tech things that work and - most important - having a goal of zero. No, zero was not attained in every case, but the reduction that was attained was far beyond expectations.
    That is why, in my opinion, one must set a goal of zero, and then explain the quite reasonable rationale behind such a goal.

    nonlocal MD

  8. Paul,

    It is useful for clinicians to have information about whether a particular facility is making progress on quality measures, but purchasers want to know each facility's relative performance and value. All the good intentions in the world don't matter if the facility I ultimately contract with delivers poorer results at higher cost.

  9. Paul et al,

    Thank you for this interesting conversation - within the context of the various perspectives here, I'm wondering if any of you have seen or have comments on the Massachusetts DPH's new healthcare-associated infection report released this past week?

    http://www.mass.gov/eohhs/docs/dph/quality/healthcare/hai/hai-report-2009-2011.pdf

    Massachusetts, like CMS, uses SIRs, but similar to Paul's suggestion (albeit not monthly or even quarterly data), the DPH included hospital-specific fact sheets that seem to be intended to look at individual hospitals' change over time. While not perfect, the data are updated through 6/30/11. Thoughts on this? Also, to Dr. Zilberberg's point about process measures, in a world of increased public reporting, do you think there is merit to reporting on such process measures as CL checklists?

    local MD

  10. If Dr. Pronovost likes it, I'm in:
    http://thehealthcareblog.com/blog/2012/02/15/to-gauge-hospital-quality-patients-deserve-more-outcome-measures/

    nonlocal
