Sunday, August 22, 2010

My bad idea

A couple of years ago, I suggested:

Why don't the insurers in Massachusetts require the hospitals here to report their HSMRs (hospital standardized mortality ratios) -- in private, with no publicity -- to them, the insurers, as a condition of being in the payers' networks?... [I]f the results are out of whack with industry norms, or otherwise indicate quality or safety problems, the insurers could then require remediation plans to remain in good standing.

Dumb idea, it turns out. As Liz Kowalczyk reports in the Boston Globe, an expert panel that has been studying the measurement of hospital mortality rates has found that the "current methodology for calculating hospital-wide mortality rates is so flawed that officials do not believe it would be useful to hospitals and patients."

Researchers evaluated software from four companies for measuring hospital mortality. "The problem was that researchers came out with vastly different results when they used the various methodologies to calculate hospital mortality between 2004 and 2007 in Massachusetts, and they could not tell which company's results -- or if any -- were accurate."

Our hospital's head of health care quality, Dr. Ken Sands, was on the panel. He is quoted in the story as saying:

"In every year there were at least a couple of hospitals ranked as having low mortality with one vendor, and high mortality with another. That hospital could either be eviscerated or rewarded depending on which vendor you choose."

Fortunately, there are other metrics that can reliably measure aspects of the quality and safety of hospitals. Death will just have to wait.

12 comments:

  1. There are also periodic calls for increased transparency of hospital statistics to the public.

    What do you think about this?

    I worry that the public may not know how to interpret them well enough, and that the statistics could be misconstrued, unfairly benefiting or penalizing various hospitals.

    Would you agree or disagree with this?

    Chris LeBeau
    http://www.chrislebeau.com/blog/

  2. Paul
    Check this out from BMJ, April 2010 by Pronovost:
    http://www.bmj.com/cgi/content/extract/340/apr19_2/c2016

    brad

  3. From Facebook:

    Nancy: I don't think your idea was a bad one.

    Toni: That is worrisome. This is the first I have heard that the metrics are not valid for a publicly reported measure.

    Ellen: I don't understand. How hard can it be to determine how many people have died in your hospital?

    Me: The issue, Ellen, is to create an index that means something relative to other hospitals. Of course, you can count how many die, but that does not tell you how many SHOULD have died, relative to an appropriate standard of care.

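    To put Ellen's question in more concrete terms, here is a minimal sketch of the observed-versus-expected arithmetic behind a standardized mortality ratio. The hospitals, column names, and predicted risks below are invented for illustration; the vendors' actual risk models are far more elaborate.

    ```python
    import pandas as pd

    # Hypothetical discharge-level data: one row per patient, with a death flag
    # and a predicted probability of death from some risk-adjustment model.
    discharges = pd.DataFrame({
        "hospital":       ["A", "A", "A", "B", "B", "B"],
        "died":           [1, 0, 0, 1, 1, 0],
        "predicted_risk": [0.30, 0.05, 0.10, 0.20, 0.25, 0.15],
    })

    # Standardized mortality ratio: observed deaths divided by expected deaths,
    # where "expected" is the sum of each patient's predicted risk of dying.
    observed = discharges.groupby("hospital")["died"].sum()
    expected = discharges.groupby("hospital")["predicted_risk"].sum()
    print(observed / expected)  # > 1: more deaths than predicted; < 1: fewer
    ```

    The hard part, of course, is not the division; it is building the model that produces the predicted risks, and that is exactly where the vendors' methods diverge.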
  4. Good for you, Paul, for bringing this to our attention. I found the BMJ article cited by Brad to be most interesting (btw, I magically got the entire article by simply clicking on 'Full Text' even though I am not a subscriber), especially the following:

    ".... differences in the quality of care within hospitals are much greater than differences between hospitals. This finding does not support the prevailing notion of large scale systematic differences in quality at the institutional level and suggests that while commercial organisations such as Enron fail corporately, hospitals are more likely to fail on the specifics—pathology in Liverpool; paediatric cardiac surgery in Bristol; radiation therapy in Missouri."

    This suggests that procedure- or condition-specific mortality rates may still have some validity.

    However, the article then goes on to advocate process metrics. My problem with that is that some process metrics have been shown to have little correlation with outcomes.

    Lest we (some, gleefully) throw up our hands and stop measuring anything, I think more funding should be devoted to improving the science sooner rather than later; and in the meantime we should keep measuring procedure- or condition-specific metrics to the extent they are valid.

    nonlocal MD

  5. Chris,

    I think the main value of transparency is to help a hospital hold itself accountable to the standard of care it espouses.

    There is little indication, thus far, that outcome metrics and process metrics are used by the public in making decisions about where to receive treatment.

    That being said, I do not share your concern about an inability of the public to understand such metrics. People who are sick spend lots of time learning about their diseases, and they show a remarkable sophistication about things related to medicine. To the extent they choose to follow the public metrics that exist, it will help them be better consumers and partners in the delivery of care.

  6. Brad,

    Thanks for the cite. Right on target!

  7. My thoughts, in a word: ARG!

    I mean, ARG!

    Honest question: is anyone asking "How in hockeysticks could somebody have written such bad software, and how could smart managers have bought it?" Honestly, did people ask "Is this software programmed to think sensibly?"

    Honestly - how could this be? Are managers in this industry just now learning to think critically?

    And I wonder how much it costs to buy these software systems.

    From a quality improvement perspective, it's easy to see how improvement would be difficult when there are such widely varying definitions of what quality is.

    Yes, it's a good thing there are other measures. But my gosh, *death* seems like a pretty important metric.

  8. Dave,

    Here's a bit more from Ken's summary presentation that might help explain it better:

    Although the four products were all developed to measure overall hospital risk-standardized mortality, they varied substantially in design and methodology. Important areas of difference included:

    - the population used to develop the models;
    - the specific patients (e.g., palliative care or DNR), diagnoses (e.g., advanced malignancies), and hospital types (e.g., specialty hospitals) that were included in or excluded from the analyses;
    - the type of statistical model;
    - the covariates included in the models;
    - the methods used for differentiating complications from co-morbidities (in the absence of a “present on admission” indicator);
    - the methods for evaluating model fit; and
    - the measures of statistical uncertainty.

    Given these marked differences in model construction, it would be anticipated that the results from these models might vary substantially, even when applied to the same study cohort.

    All of the models left certain important methodological issues unaddressed.

    Finally, all of the models had some features that the researchers believe were problematic, such as the inclusion of procedures and socio-demographic status as adjustment factors, neither of which would be appropriate to include when evaluating quality of care. (A toy illustration of how such design differences can flip a hospital's ranking appears below.)

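    Here is that toy illustration (my own, not from the panel's report or any vendor's product): two "vendors" that differ only in one exclusion rule can rank the same two hospitals in opposite order. All names and numbers are invented.

    ```python
    import pandas as pd

    # Same discharges, same predicted risks; the only difference is whether
    # palliative-care patients are excluded before computing the ratio.
    df = pd.DataFrame({
        "hospital":       ["A", "A", "A", "B", "B", "B"],
        "palliative":     [True, False, False, True, False, False],
        "died":           [1, 1, 0, 1, 1, 0],
        "predicted_risk": [0.9, 0.3, 0.1, 0.5, 0.4, 0.2],
    })

    def smr(data):
        """Observed deaths divided by expected deaths, per hospital."""
        grouped = data.groupby("hospital")
        return grouped["died"].sum() / grouped["predicted_risk"].sum()

    print(smr(df))                     # Vendor 1 (all patients): A looks better than B
    print(smr(df[~df["palliative"]]))  # Vendor 2 (palliative excluded): A looks worse
    ```

    Multiply that single difference by the full list above (populations, covariates, statistical form, handling of complications), and the divergent vendor rankings Ken describes are not surprising.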
  9. Paul, the additional data from Dr. Sands helps considerably, but it raises the question of who participated in the development of this software (a rhetorical question, btw). If they did it the way vendors do hospital EMRs, they probably failed to ask the people who might have made it a better product.

    The BMJ article by Pronovost also had a good explanation for the poor performance of deaths as an indicator: an unfavorable signal-to-noise ratio, i.e., preventable deaths vs. overall deaths.

    nonlocal

  10. I think the signal-to-noise ratio is the inherent problem. I don't think this is a programming issue. I just don't think you can design a precise enough algorithm.

  11. It will always be very difficult to compare outcomes between two facilities (for the reasons already mentioned). A better approach would be to grade/compensate facilities for adhering to care management processes (CMPs). Providers and facilities generally have much more control over their processes than over the ultimate outcome.

  12. Paul, are we dancing with this measure because we lack the bravado to demand accountability for more specific measures? Clinical standards for a host of services are available, but they would direct attention to specific providers. Is that a more difficult thing, politically, for hospitals to do? Are we timid in our transparency because some stakeholders (i.e., physicians) are not on board?
