Saturday, April 21, 2012

Unethical and shameful behavior at the CDC

At the recent Health Care Quality Summit in Saskatoon, Sarah Patterson, the Virginia Mason Medical Center expert on Lean process improvement, noted,  "I'd rather have no board than an out-of-date board. They have to be real."  She was referring to the PeopleLink Board that is placed in key locations in her hospital to provide real-time visual cues to front-line staff as to how they are doing in meeting quality, safety, work flow, and other metrics in the hospital.

Now comes the CDC, announcing in April 2012 that 21 states had significant decreases in central line-associated bloodstream infections between 2009 and 2010.

CDC Director Thomas R. Frieden said, “CDC’s National Healthcare Safety Network is a critical tool for states to do prevention work. Once a state knows where problems lie, it can better assist facilities in correcting the issue and protecting patients.”

I am trying to be positive when progress is made, and I am also trying to be respectful of our public officials -- whom I know to be dedicated and well-intentioned -- but does Dr. Frieden really believe that posting data from 2009 and 2010 has a whit of value in helping hospitals reduce their rate of infections?

Try to imagine how you as a clinical leader, a hospital administrator, a nurse, a doctor, a resident, or a member of the board of trustees would use such data.  Answer:  You cannot, because there is no use whatsoever.

I am also perturbed by the CDC's insistence on using a "standardized infection ratio" as opposed to a simple count of infections or rate of infections per thousand patient days.

Here's what the agency's metric means:

The SIRs represent comparisons of observed HAI occurrence during each distinct reporting period with the predicted occurrence based on the rates of infections among all facilities adjusting for key covariates (referent population).

The referent period remained January 2006 through December 2008, as in previous SIR reports.

The CLABSI and CAUTI SIRs are adjusted for patient mix by type of patient care location, hospital affiliation with a medical school, and bed size of the patient care location.

Affiliation with a medical school!  Wait, do you get a bye from this statistic if you are not affiliated with a medical school . . . or if you are? Why on earth should that matter when the issue is the use of a well-established protocol to avoid central line infections?

So, the bad news is that CDC data from 2009 and 2010 is too old to be useful.  The good news is that the methodology chosen for reporting the data is meaningless.  The "predicted occurrence" is basically a benchmark based on a period of time in which central line infections were an epidemic in the country.
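
To make the arithmetic concrete, here is a minimal sketch in Python, using entirely hypothetical numbers, of the two metrics at issue: the simple rate per 1,000 central line days and the SIR, which divides the infections a hospital actually observed by the number "predicted" from the 2006-2008 referent data.

    # Minimal sketch with hypothetical numbers -- not CDC data.
    observed_infections = 12      # CLABSIs counted during the reporting period
    central_line_days = 6000      # device days accumulated in the same period
    predicted_infections = 15.0   # what the 2006-2008 referent baseline would
                                  # "predict" for this mix and volume of patients

    rate_per_1000_line_days = observed_infections / central_line_days * 1000
    sir = observed_infections / predicted_infections

    print(f"Rate: {rate_per_1000_line_days:.1f} infections per 1,000 line days")  # 2.0
    print(f"SIR:  {sir:.2f}")                                                     # 0.80

Note what an SIR below 1.0 actually tells you: only that a hospital had fewer infections than the epidemic-era baseline would predict, not how far it remains from zero.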

Jim Easton, from the NHS, put it well at the Saskatoon conference:

We need to improve ourselves as leaders:  Be intolerant of mediocrity, to hate it. Reject normative levels of harm.  It is not OK to be in the middle of the distribution of the number of people we are killing.

A friend of mine, working in a Midwest ICU, read Jim's comment and said,

It's not morally ok.  But it is, unfortunately, accepted as "reasonable."

Catherine Carson, Director, Quality & Patient Safety at Daughters of Charity Health System, put it this way a few weeks ago on a safety and quality listserv:

When the goal is zero – as in zero hospital-acquired infections, or falls – why seek a benchmark? A benchmark would then send the message that, in comparison to X, our current performance level is okay, which is a false message when the goal for harm is zero.
 
Jim Easton reinforces Sarah Patterson's point by saying:

It is shameful not to share clinical quality information.  We have an ethical obligation to share information about how well the health care system is performing.

Maura Davies, the CEO of the Saskatoon Health Region (seen here with the province's Minister of Health Don McMorris), summarized this for her staff in an email after the summit:

As we embrace Lean as the foundation of our management system, we are learning that when it comes to safety, there are only two numbers that matter: zero and one hundred. We should settle for nothing less than zero harm to patients or staff. We should expect 100 per cent compliance with the standards and evidence based practices we have adopted, such as the surgical checklist, hand hygiene and falls prevention. Are we up to these challenges?

I wonder if Dr. Frieden understands that his agency's policy with regard to this kind of information is, fundamentally, unethical and, indeed, shameful.

9 comments:

  1. The SIR is ridiculous. Any ideas why it's reported this way? In our hospital, we use the CDC definition of CLABSI and rate per 1,000 line days, as recommended by the CDC. It's more work to count the number of line days, but it's sooooo worth it. Weird that the CDC doesn't follow its own advice.

  2. The Standardized Infection Ratio smells to me like a construction developed after some lobbying by those institutions that constantly use the 'my patients are sicker' excuse. The CDC should know better. Don't cave to obstructionists.

    nonlocal MD

  3. Michael Bennett, April 23, 2012, 8:19 AM

    Paul Levy's blog post highlights what consumer advocates have known for years: public health leadership is sorely lacking because it is influenced by the very industry it is supposed to be monitoring. In addition to Mr. Levy's comments about the metrics used to determine CLABSIs, etc., it should be pointed out that CDC's conclusions are arrived at by comparing NNISS data to the current NHSN system. NNISS data came from approximately 300 hospitals confidentially and voluntarily reporting to the CDC (and therefore probably relatively accurate), whereas NHSN data is coming from some 4000 hospitals largely reporting because of legislative mandates (and therefore woefully underreporting). This type of shell game does nothing to promote safe healthcare and in fact only serves to further enable a dysfunctional and therefore too often harmful culture. Significant change in the nation's healthcare system will take place only when true leadership emerges.

    Michael Bennett
    President
    The Coalition For Patients' Rights

  4. Thank you Paul for once again showing the CDC needs to get on board the train or get off the platform right along with leadership.

    Patty Skolnik
    Executive Director
    Citizens for Patient Safety

  5. Catherine Carson, April 23, 2012, 5:28 PM

    The focus on hospital-acquired infections and the need for transparency from hospitals has led many states to mandate public reporting of HAIs. Then the need was for a standardized reporting tool, for which the CDC promoted the National Healthcare Safety Network (NHSN). The unintended consequence of these pressures is a dramatic increase in the number of facilities reporting to NHSN, from 3,000 in 2010 to what is estimated to be 16,500 in 2013 to fulfill HAI reporting requirements. NHSN capacity is strained to the max while congressional funding has been flat since 2010. This tool will fail soon if funds are not allocated to modernize the NHSN information technology platform to accommodate electronic data collection, and to improve the NHSN HAI reporting capabilities by facility and infection type so that the data can be turned into useful information for improvement.

    Catherine Carson, BSN, MPA, CPHQ
    Director, Quality & Patient Safety
    Daughters of Charity Health System

  6. I found this post and some of the comments disturbing. As a hospital epidemiologist with 20 years of experience, I find the SIR to be a valuable metric. It does not preclude the reporting of infection rates; in fact, to calculate the SIR you need both the number of infections and the device days that you would use to calculate the infection rate. However, the SIR and the infection rate have different utilities. For example, to the director of an ICU, device-associated infection rates trended over time are probably more valuable. However, to the hospital administrator and to the consumer, the SIR is generally more valuable.

    At my hospital my group produces over 80 different infection rates per quarter. Suppose my CEO asks me to tell him how we are doing with regard to infections. I could send him all those rates and let him sort through them, but even though I do this for a living I can't possibly look at all those numbers and distill out how we are doing hospital-wide. The SIR allows you to develop a global metric, by infection type, by hospital unit, and it even allows you to combine different types of infections. So each ICU or hospital ward can have an SIR, and the SIR can even be calculated for all infections hospital-wide in a single number. I think that's powerful. It's also more intuitive for the consumer (e.g., it's easier for the consumer to comprehend that the infection rate is 20% higher than expected than that the hospital has a rate of 3/1,000 line days).

    Moreover, comparing raw infection rates across hospitals can be misleading. For example, simply comparing the bloodstream infection rate for hospitals (i.e., taking all bloodstream infections in the hospital and dividing by all line days) could lead the consumer to make incorrect assessments regarding hospital quality, since a rate of 2/1,000 catheter days in an academic medical center is most likely a better performance than a rate of 1/1,000 catheter days at a hospital that does not provide tertiary care. Because the SIR allows for much better risk adjustment in its calculation, the hospital with the lower rate may actually have a higher SIR (worse performance). The SIR gets us closer to comparing apples to apples.

    Lastly, to provide a bit of history: the only reason we can even discuss healthcare-associated infection rates is because of CDC. The entire surveillance methodology for HAIs, including the infection definitions, has been developed by CDC over the past 40 years. CDC invested in this area long before any other organization, and before the public or hospital administrators or even most infectious diseases physicians had any interest in the area. The people who work in the hospital infections branch at CDC do great work and get little credit. So the comments in the post were, in my opinion, offensive and disrespectful.

    Michael Edmond, MD, MPH, MPA

  7. The CDC does marvelous work, as you note, and deserves credit for that. But this use of ancient data based on a SIR methodology that is clearly flawed is not an example of that. Contrary to your comment, there is nothing useful to a hospital administrator, or indeed, the clinical leadership in the report. While it might be interesting in some epidemiological settings, it provides no actionable information. In that sense, the CDC report offers no value to the efforts to reduce harm in hospitals. The report is a lost opportunity by the agency to make a difference in an area in which people are being preventably killed every day. It is for that reason that I hold firm to my characterization.

  8. The SIR can be calculated with the most recent CDC data. If your hospital is an NHSN hospital, you have the data to calculate the SIR with the current CDC rates. Inside any given hospital, the SIR shouldn't replace the infection rates, but if you have an understanding of the SIR, it is really quite useful. It allows you to look at the same data in a different way. At my hospital, we report both the SIR and the rates and these are posted in each ICU and at the employee entrance to the hospital.

  9. Michael Bennett, May 06, 2012, 5:27 PM

    Not 40 years but more than 50 years of continuously soaring infection rates undermines any argument about CDC as well as hospital leadership on this issue. There are cemeteries bulging with the graves of victims.
