Wednesday, February 01, 2012

More thoughts on having less benchmarking

Following up on Catherine Carson's comment on the inadvisability of using benchmarks for certain patient safety goals, another person on the National Patient Safety Foundation's listserv pointed out that the Institute for Safe Medication Practices has always maintained a position against benchmarking for medication errors.  Here are excerpts from the ISMP website, responding to the questions, "What is the national medication error rate? What standards are available for benchmarking?"

A national or other regional medication error rate does not exist. It is not possible to establish a national medication error rate or set a benchmark for medication error rates. Each hospital or organization is different. The rates that are tracked are a measure of the number of reports at a given institution, not the actual number of events or the quality of the care given. Most systems for measuring medication errors rely on voluntary reporting of errors and near-miss events. Studies have shown that even in good systems, voluntary reporting only captures the "tip of the iceberg." For this reason, counting reported errors yields limited information about how safe a medication-use process actually is. It is very possible that an institution with a good reporting system, and thus what appears to be a high error "rate," may have a safer system.

In addition, on June 11, 2002, the National Coordinating Council for Medication Error Reporting and Prevention published a statement refuting the use of medication error rates, [asserting that] the "Use of medication error rates to compare health care organizations is of no value." The Council has taken this position for the following reasons:
  • Differences in culture among healthcare organizations can lead to significant differences in the level of reporting of medication errors.
  • Differences in the definition of a medication error among healthcare organizations can lead to significant differences in the reporting and classification of medication errors.
  • Differences in the patient populations served by various healthcare organizations can lead to significant differences in the number and severity of medication errors occurring among organizations.
  • Differences in the type(s) of reporting and detection systems for medication errors among healthcare organizations can lead to significant differences in the number of medication errors recorded.
According to the statement, the Council believes that there are no acceptable incidence rates for medication errors. The goal of every healthcare organization should be to continually improve systems to prevent harm to patients due to medication errors. Healthcare organizations should monitor actual and potential medication errors that occur within their organization, and investigate the root cause of errors with the goal of identifying ways to improve the medication-use system to prevent future errors and potential patient harm. The value of medication error reporting and other data gathering strategies is to provide the information that allows an organization to identify weaknesses in its medication-use system and to apply lessons learned to improve the system. The sheer number of error reports is less important than the quality of the information collected in the reports, the healthcare organization's analysis of the information, and its actions to improve the system to prevent harm to patients.


Anonymous said...

Is this a throwing-up-of-hands or bowing-to-pressure?

I hope that they followed up with recommendations for rapid improvement. There are technologies for greatly improved reporting, and others have shown that institutional cultures can be dramatically shifted with concerted, determined leadership. Patient populations are stratified and compared every day in the budget office, and clarity in definition? Please. This follows a will to act.

There is no need to reinvent the wheel - solutions to each of these have been in the literature for years. The trite ending concerning patient harm and 'lessons learned' is insulting, if I read the rest to say 'it is just too complicated to know.' Well, do a better job of finding out.

Anonymous said...

I can understand why they would take that position (that there can be no 'benchmark'), but one wonders if there is a number or rate that can be viewed as clearly an outlier in the negative direction. In other words, an 'emergency' number, where the organization is obviously way behind everyone else and needs to take immediate and systemic action.


jonmcrawford said...

I am not following. We don't want to use benchmarks because they are voluntarily reported? So we throw the baby out with the bathwater? What option replaces the use of benchmarks? (Without saying that people need to be honest, which is the same as not saying anything.)

If we need a benchmark that can be relied on, then we need to find some way to collect the data that is not voluntary, can be backed up with hard data, and is measured the same way across all organizations. That's why industry standards like HEDIS or JCAHO exist; they're the only way to compare things.