I was talking about public reporting the other day with an MD colleague. He pointed out that hospitals often have different definitions for a variety of measures, like ventilator associated pneumonia (VAP). Therefore, he pointed out, public reporting of such measures can be problematic. I said, "No, it's not a problem."
Why not?
Let's look at what we are trying to accomplish. Simply put, we want the hospitals, doctors, and nurses to engage in systemic process improvement in their institutions. What are the elements of doing that? Brent James lays them out quite clearly, based on the concept of shared baselines:
1 -- Select a high priority clinical process;
2 -- Create evidence-based best practice guidelines;
3 -- Build the guidelines into the flow of clinical work;
4 -- Use the guidelines as a shared baseline, with doctors free to vary them based on individual patient needs;
5 -- Meanwhile, learn from and (over time) eliminate variation arising from the professionals, while retaining variation arising from patients.
That is the essence. Now where does public reporting come in? The impetus for transparency of clinical outcomes can be found in the writings of MIT's Peter Senge. In The Fifth Discipline, he discusses creative tension.
[T]he gap between vision and current reality is . . . a source of energy. If there was no gap, there would be no need for any action to move toward the vision. Indeed, the gap is the source of creative energy. We call this gap creative tension.
Imagine a rubber band, stretched between your vision and current reality. When stretched, the rubber band creates tension, representing the tension between vision and current reality. What does tension seek? Resolution or release. There are only two possible ways for the tension to resolve itself: pull reality towards the vision or pull the vision towards reality. Which occurs will depend on whether we hold steady to the vision.
So the deal is this. You establish an audacious goal for your organization, one that truly stretches everyone. You publish that target for the world to see, and you also regularly publish your progress towards that target. The gap between the current state and the future state helps to drive your organization towards the target.
As I have mentioned:
Transparency's major societal and strategic imperative is to provide creative tension within hospitals so that they hold themselves accountable. This accountability is what will drive doctors, nurses, and administrators to seek constant improvements in the quality and safety of patient care.
There is nothing in this construct that requires one hospital to use the same metrics as another. Indeed, I would suggest that having an external authority (e.g., a regulatory agency) establish a common metric will often undermine, rather than support, process improvement. Why? Because the internal constituencies who must buy off on the need for process improvement will question the applicability and accuracy of that metric. Resentment will arise, and progress will slow down.
I can feel people getting antsy now. "We need comparability in public reporting so consumers will know how to choose among hospitals." Nonsense. There is virtually no evidence that the public uses clinical information from websites to make choices as to where they get treatment. Jeez, when Bill Clinton needed heart surgery in New York, where mortality rates of the hospitals are publicly available, he went to one that had among the highest figures. (OTOH, maybe Hillary sent him there . . . but that's another story!)
I have addressed this point before, also.
There are often misconceptions as people talk about "transparency" in the health-care field. They say the main societal value is to provide information so patients can make decisions about which hospital to visit for a given diagnosis or treatment. As for hospitals, people believe the main strategic value of transparency is to create a competitive advantage vis-à-vis other hospitals in the same city or region. Both these impressions are misguided.
Seriously, are you really likely to decide on where to get ICU care based on the rate of VAP? Even for elective surgery, you are most likely to go to the hospital or specialist recommended by your primary care doctor. If you have cancer, you don't choose hospitals based on infection rates. You do your research and make your choice based on many other factors (e.g., empathy of doctors, availability of clinical trials).
I want to be clear that there is value in having a government requirement for transparency, but -- in most cases -- I would leave it up to the individual hospitals to use the definition of each metric that most suits them. If we tell them what metric to use, we have taken away the self-accountability that we want. Require them to post their goal and their progress. Let them add editorial comments about why they chose the metric they did. What we want to see is that they improve and that they maintain and sustain their improvement. Comparability with other hospitals simply does not matter.
Hi Paul. Your post reminded me of this: http://goo.gl/C3ovd, a very short article about how benchmarking is the fastest route to mediocrity.
Excellent. Thank you.
A third possibility is for the rubber band to stretch to the point that it breaks.
Hah! Good observation. You want to set the goal to be both aggressive and achievable.
We can say that we're not so good at measuring most phenomena. We're pathetic, really, when it comes to human behavior. So why bother? We would never have had a flu vaccine if we threw up our hands after Mendelian genetics proved too simple for most biology.
ReplyDeleteGive scientists a little money to compete (or consultants who know less six times as much), and in no time, there will be a replicable standard for measure. Now open the doors and give hospitals incentives (or penalty) to compete on transparency. Not on metrics alone, but on observable transparency.
It isn't that safety measures are poor (some are - but that isn't a problem that science can't take care of in a competitive heartbeat). It is the demand for medicine to be a science that is the problem.
Well, Paul, now look at what you’ve done. I read your blog posting and said to myself “what do you think of that?” In the last 24 hours I’ve received two emails from friends saying “what do YOU think of that?” So…
At one level I’ve seen organizations spend excessive resources “fussing” in search of the perfect measure to explain their practice, endlessly benchmarking in search of the right number that allows them to look good or “hide,” and in the process wasting all their time on number crunching and not on improvement.
It is amazing what you can accomplish using Brent’s steps. At the same time, most of us need help; we are not Brent, nor do we have his team or resources. Having the counsel of a respected party offering up valid measures can be a very good thing. It can help all of us draw on the evidence, be balanced in our approach (and not just see the issue through our biased lens), build on the learning of others, and do our annual benchmarking and target setting. I’ve seen people in MA take Ken Sands’ work at BIDMC in measuring harm and bring it to their organization; we did it at Winchester Hospital. The fact that we use the same approach allows us to share, challenge, improve, and build.
Comparability is needed to the extent it allows us to measure and assure accountability to essential healthcare standards and outcomes; it is required for public reporting. I do believe we must be held accountable publicly for our outcomes. The MA Health Care Quality and Cost Council has set some important and practical guidelines for measures. Then, comparability is also a gift to the degree it allows us to build and share knowledge and learning, set bold targets, and then achieve them.
Senge told us we need to confront reality. What I have found most helpful about the work of groups such as the NQF on comparability is that “beauty is no longer just in the eye of the beholder,” and on this base of evidence and comparability, we can make exceptional progress and improve together.
Thanks, Jim
Do you really think doctors are capable of objectively measuring their own performance, even when measures are not tied to reporting or performance incentives? I worked for a community health center that wanted to see how well controlled its hypertensive patients' blood pressure was. The CMO asked me to average the latest BP reading from all patients with a hypertensive diagnosis in the last year (not just new hypertensives). The result was quite high. He then asked me to calculate the average, across patients, of each patient's mean of their last three BP readings. The result was lower, but still high. He then asked me to average the lowest of each patient's last three BP readings, and the result was quite close to normal. All the MDs congratulated themselves on how well they were controlling BP in hypertensive patients. (The arithmetic is sketched just after this comment.)
To be clear, this was not part of any public reporting or P4P initiative. This was purely for "quality improvement" purposes at that one CHC. It gives me very little faith in self-reported performance measures.
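For the curious, here is a minimal sketch (in Python, with invented readings -- no real patient data) of how the three definitions in that anecdote diverge on the very same charts:

```python
# Hypothetical illustration: the three "average BP" definitions from the
# anecdote above, applied to the same invented data.
# Systolic pressures in mmHg, most recent reading last.

patients = {
    "pt01": [138, 150, 162],
    "pt02": [132, 148, 155],
    "pt03": [128, 144, 150],
    "pt04": [125, 140, 149],
}

def mean(xs):
    return sum(xs) / len(xs)

# Definition 1: average of each patient's latest reading.
latest = mean([readings[-1] for readings in patients.values()])

# Definition 2: average of each patient's mean over the last three readings.
avg_of_avgs = mean([mean(readings) for readings in patients.values()])

# Definition 3: average of each patient's *lowest* of the last three --
# the definition that made the clinic look best.
best_case = mean([min(readings) for readings in patients.values()])

print(f"avg of latest readings:   {latest:.1f} mmHg")      # 154.0 -- "quite high"
print(f"avg of per-patient means: {avg_of_avgs:.1f} mmHg")  # 143.4 -- lower, still high
print(f"avg of per-patient lows:  {best_case:.1f} mmHg")    # 130.8 -- "close to normal"
```

Same patients, same charts; the only thing that moved was the definition.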
I'm so glad you commented, Jim, as it allows me to clear up a couple of points.
First, I absolutely agree that almost everyone can use help and learn from others. Any quality control program worth its salt will attempt to do so, to see what measures have been most effective in other places, and will also share stories of how improvement actually occurred.
It is this paragraph that I find troubling:
"Comparability is needed to the extent it allows us to measure and assure accountability to essential healthcare standards and outcomes; it is required for public reporting. I do believe we must be held accountable publicly for our outcomes. The MA Health Care Quality and Cost Council have set some important and practical guidelines for measures. Then, comparability is also a gift to the degree it allows us to build and share knowledge and learning, set bold targets, and then achieve them."
Being held publicly accountable does not mean we have to use the same definitions for everything. Sure, some things are easily made standard, e.g., "door to balloon time" for patients arriving with chest pain. For others, some minor differences in the metric's definition are inconsequential -- as long as the hospital clearly states the basis for the metric and holds to it over time -- so we can watch the trend.
So, I don't believe that comparability is the essence of the gift you describe. Transparency is.
Dear anon 8:11,
You ask "Do you really think doctors are capable of objectively measuring their own performance, even when measures are not tied to reporting or performance incentives?"
I admit that there are some people who will do as you say, whether a metric is publicly reported or not. I do, however, think it is a small minority.
I don't see that such people are more or less likely to do so based on whether the metric is their own or one imposed by the state.
Doctors are not trained to test their own or each other's work. If someone gets better, they leave the office. If they don't, it is their fault for not taking better care of themselves. Very few physicians or physician leaders use practice data epidemiologically. (Volume, morbidity mix, acuity, yes. But to understand performance?) The move to ACOs will show this gap dramatically, as practices will be accountable for outcomes that they have little experience thinking about or measuring.
It is time for medicine to start doing, not just using, science. Testability, replicability, and best practice diffusion require accountability. The CMO in Anon 8:11's story didn't want to know how well patients are cared for, but how the numbers would look in public. We should strongly question the culture that allows a CMO to believe that (blatantly dishonest) manipulation of data is even an option.
Why isn't CMS asking for raw as well as managed data? Perhaps demand could be generated for 'supplemental' data, alongside the digested DPH/CMS numbers, that is closer to the reality of medical practice. In the same way that epidemiologists can finally mull over some of the CMS data (showing how poorly measures are measured), we might finally begin to understand how far we have yet to evolve.
And wouldn't the city or state health department want to know how well blood pressure is being controlled? (If only, cynically, to predict ED usage).
Where we seem to completely agree: The greatest benefit of public reporting is the creation of internal desire and drive to change the behaviour and performance of the organization. And the group doing the reporting needs to own the metric, particularly in the early days. Sometimes you just have to trust the group will come up with a metric that is meaningful to the patient. This doesn’t always happen but I’ve learned to be more patient - eventually they seem to find their way.
But I want to add a requirement for how to report any institution’s data. It is important to also have public reporting of the definition used, including clear descriptions of the numerator and denominator. When I want to learn how to improve my ICU’s performance metrics, one of the first things I try to do is look for positive deviants out there that I can study and borrow/steal from. When I do this, I often learn their success is determined primarily by their definition. It’s easy to improve your VAP rate if you only count the patients who are least at risk. I’ve seen this time and again. (I risk heresy here when I say that I don’t personally believe anyone who has a rate of 0. They either have a wonky definition or they look after not-very-sick ICU patients. But that blasphemous belief doesn’t stop me from trying to get our ICUs to zero.)
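To make that point concrete, here is a hedged sketch of how a definitional choice alone can take a VAP rate from substantial to zero. The per-1,000-ventilator-days form is a common convention; the episodes and the acuity-based inclusion rule below are entirely invented for illustration:

```python
# Invented ICU data showing how an exclusion rule in the denominator
# can manufacture a VAP rate of zero.

from dataclasses import dataclass

@dataclass
class VentEpisode:
    vent_days: int
    vap: bool          # met the unit's VAP definition
    high_acuity: bool  # hypothetical severity flag (e.g., an APACHE cutoff)

episodes = [
    VentEpisode(12, True,  True),
    VentEpisode(3,  False, False),
    VentEpisode(8,  True,  True),
    VentEpisode(2,  False, False),
    VentEpisode(15, False, True),
    VentEpisode(4,  False, False),
]

def rate_per_1000(cases, vent_days):
    return 1000 * cases / vent_days

# Definition A: count every ventilated patient.
all_days = sum(e.vent_days for e in episodes)
all_cases = sum(e.vap for e in episodes)

# Definition B: quietly exclude high-acuity patients, leaving only
# those least at risk -- the "wonky definition" the comment describes.
low = [e for e in episodes if not e.high_acuity]
low_days = sum(e.vent_days for e in low)
low_cases = sum(e.vap for e in low)

print(f"all patients:    {rate_per_1000(all_cases, all_days):.1f} per 1,000 vent-days")  # 45.5
print(f"low-acuity only: {rate_per_1000(low_cases, low_days):.1f} per 1,000 vent-days")  # 0.0
```

Which is exactly why the numerator and denominator belong in the public report alongside the rate itself.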
I am a physician who was once a skeptic about performance improvement but experienced an almost Paul-on-the-road-to-Damascus conversion into the fold, under duress, several years ago. When that happens, one acquires a true hunger to know more and to excel at it, and a full understanding of what this truly means for patients when you do it right.
To me, the central question here is one of how to convert CEOs, physicians and staff into the fold - because once you do that, all the disagreements and fears about metrics and transparency and incentives and penalties simply disappear. It is blindingly obvious to the converted that performance improvement must be embedded into your culture, and that's all there is to it.
Paul's post is written by a convert. Some of the comments on the post demonstrate that the non-converted will always find a way to game the system, no matter how you construct it - accountability, penalties, incentives, comparability whatever.
So, I ask - how can we convert those who need it? After that the task is easy.
nonlocal
Thanks, Paul, for your thought-provoking proposal to have each hospital post its rate and pattern of improvement.
The state's department of public health should post a graph showing a hospital's infection rate month by month. That way, even if hospitals use different definitions of infections, it will be readily apparent whether or not a given hospital has been improving (assuming, of course, that they haven't changed the way they measure infections from month to month). A simple sketch of such a display appears after this comment.
When I made this suggestion today at the meeting of the Massachusetts Coalition for the Prevention of Medical Errors, it was welcomed. The Massachusetts DPH now has 3 years of data on hospitals' infection rates. Let's hope reporting improvement rates becomes standard practice.
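As a rough illustration of that suggested display (hospital names and monthly rates invented for the example), a few lines of Python with matplotlib can render the month-by-month view. The levels are not comparable across hospitals, since each uses its own definition, but each hospital's trend is readable on its own:

```python
# Sketch of a month-by-month infection-rate display. All data invented.

import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]

# Two hospitals using *different* infection definitions: absolute levels
# are not comparable, but each hospital's own trend is.
rates = {
    "Hospital A": [4.1, 3.8, 3.9, 3.2, 2.9, 2.5],
    "Hospital B": [2.0, 2.1, 2.4, 2.3, 2.6, 2.8],
}

for name, series in rates.items():
    plt.plot(months, series, marker="o", label=name)

plt.ylabel("Infections per 1,000 patient-days (each hospital's own definition)")
plt.title("Monthly infection rates: watch each hospital's trend")
plt.legend()
plt.show()
```

In this invented example, Hospital A reports higher absolute rates under its own (perhaps stricter) definition yet is clearly improving, while Hospital B looks better on level but is drifting the wrong way -- the trend, not the comparison, carries the information.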