Tuesday, January 22, 2013

Artless rather than artful variability

There is a prevailing view among skeptical observers of patient safety reporting that doctors and nurses will intentionally skew results about things like central venous catheter bloodstream infections ("CVC-BSIs") to portray improvement in their hospital's performance.  A recent paper by Mary Dixon-Woods and others in the Milbank Quarterly gives the lie to that assertion.*  The authors conducted an ethnographic study of infection data reported to a patient safety program.  After many hours of observation and telephone interviews involving 17 ICUs in the UK, here's what they found:

Variability was evident within and between ICUs in how they applied inclusion and exclusion criteria for the program, the data collection systems they established, practices in sending blood samples for analysis, microbiological support and laboratory techniques, and procedures for collecting and compiling data on possible infections. Those making decisions about what to report were not making decisions about the same things, nor were they making decisions in the same way. Rather than providing objective and clear criteria, the definitions for classifying infections used were seen as subjective, messy, and admitting the possibility of unfairness. Reported infection rates reflected localized interpretations rather than a standardized dataset across all ICUs. Variability arose not because of wily workers deliberately concealing, obscuring, or deceiving but because counting was as much a social practice as a technical practice.

Conclusions: Rather than objective measures of incidence, differences in reported infection rates may reflect, at least to some extent, underlying social practices in data collection and reporting and variations in clinical practice. The variability we identified was largely artless rather than artful: currently dominant assumptions of gaming as responses to performance measures do not properly account for how categories and classifications operate in the pragmatic conduct of health care.

What are we to make of this?  I suppose we should feel good that clinicians are not intentionally skewing reported results about infection control.  But we should not feel so good that there is such large variability in the collection of data, even if it is "artless."  That variability suggests that financial penalties and incentives based on these reported rates are likely to be misapplied.  The authors address this point directly:

Before CVC-BSIs were used as a performance measure, the data noise associated with the CA-BSI definition was of little consequence, and could be resolved locally. Rates based on this definition could be used by organizations to detect trends over time as long as they were internally consistent in their counting practices. The current use of these rates for performance measurement, pay-for-performance, and reputational sanctions, however, has converted a locally useful definition into a means of scrutiny and control, and could undermine its value for any purpose, as well as risking unfairness. The fallibilities of data collection and reporting systems also have important consequences for improvement efforts: poor practices may be reinforced; improvements may not be rewarded; or the search for cases may be less aggressive.

Our study also has important implications for current policies of classing a CVC-BSI as a “never event.” If the data produced by different settings are not comparable, then “getting to zero,” the standard implied by most targets and standards in the United States and elsewhere, may not always be possible for all units. The relationship between catheter care and infection outcomes may not be as stable as the current policy assumes.
---
*  Many thanks to Mike Davidge, Head of Measurement, Senior Improvement Advisor at the NHS Institute for Innovation and Improvement, for letting me know about this paper.
