Comments on Not Running a Hospital: "Comparability doesn't matter" (Paul Levy)

Ken Farbstein (2012-02-14 16:43):

Thanks, Paul, for your thought-provoking proposal to have each hospital post its rate and pattern of improvement.

The state's department of public health should post a graph showing each hospital's infection rate month by month. That way, even if hospitals use different definitions of infections, it will be readily apparent whether or not a given hospital has been improving (assuming, of course, that it hasn't changed the way it measures infections from month to month).

When I made this suggestion today at the meeting of the Massachusetts Coalition for the Prevention of Medical Errors, it was welcomed. The Massachusetts DPH now has three years of data on hospitals' infection rates. Let's hope reporting improvement rates becomes standard practice.

Anonymous ("nonlocal") (2012-01-31 15:21):

I am a physician who was once a skeptic about performance improvement but experienced an almost Paul-on-the-road-to-Damascus conversion into the fold, under duress, several years ago. When that happens, one acquires a true hunger to know more and to excel at it, and a full understanding of what doing it right truly means for patients.

To me, the central question here is how to convert CEOs, physicians, and staff into the fold, because once you do that, all the disagreements and fears about metrics, transparency, incentives, and penalties simply disappear. It is blindingly obvious to the converted that performance improvement must be embedded in the culture, and that's all there is to it.

Paul's post is written by a convert. Some of the comments on the post demonstrate that the non-converted will always find a way to game the system, no matter how you construct it: accountability, penalties, incentives, comparability, whatever.

So I ask: how can we convert those who need it? After that, the task is easy.

Susan Shaw, MD (2012-01-31 10:21):

Where we seem to completely agree: the greatest benefit of public reporting is the creation of an internal desire and drive to change the behaviour and performance of the organization. And the group doing the reporting needs to own the metric, particularly in the early days. Sometimes you just have to trust that the group will come up with a metric that is meaningful to the patient. This doesn't always happen, but I've learned to be more patient; eventually they seem to find their way.

But I want to add a requirement for how any institution's data are reported: the definition used, including clear descriptions of the numerator and the denominator, must be publicly reported as well. When I want to learn how to improve my ICU's performance metrics, one of the first things I try to do is look for positive deviants out there that I can study and borrow (or steal) from. When I do this, I often learn that their success is determined primarily by their definition. It's easy to improve your VAP rate if you only count the patients at the least risk. I've seen this time and again. (I risk heresy here when I say that I don't personally believe anyone who reports a rate of zero. They either have a wonky definition or they look after not-very-sick ICU patients. But that blasphemous belief doesn't stop me from trying to get our ICUs to zero.)
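[Editor's note: the point just above about publishing numerators and denominators can be made concrete with a toy calculation. The sketch below uses entirely invented numbers, not any real ICU's data, to show how the same unit can report very different VAP rates, including a suspicious zero, depending on which patients the definition counts.]

```python
# Toy illustration: one ICU, one month, two VAP-rate definitions.
# A VAP rate is conventionally reported per 1,000 ventilator-days.
# All patient numbers below are made up for illustration only.

def vap_rate(infections, ventilator_days):
    """Ventilator-associated pneumonia rate per 1,000 ventilator-days."""
    return 1000 * infections / ventilator_days

# Hypothetical patients: (infections, ventilator_days, high_risk)
patients = [
    (1, 12, True), (0, 3, False), (1, 9, True),
    (0, 2, False), (0, 4, False), (1, 15, True),
]

# Definition A: count every ventilated patient.
inf_all = sum(p[0] for p in patients)
days_all = sum(p[1] for p in patients)

# Definition B: quietly exclude the high-risk, long-stay patients
# from both numerator and denominator.
low_risk = [p for p in patients if not p[2]]
inf_low = sum(p[0] for p in low_risk)
days_low = sum(p[1] for p in low_risk)

print(f"All patients:  {vap_rate(inf_all, days_all):.1f} per 1,000 vent-days")
print(f"Low-risk only: {vap_rate(inf_low, days_low):.1f} per 1,000 vent-days")
```

With identical care, Definition B reports a rate of zero, exactly the kind of "wonky definition" zero described above; publishing the numerator and denominator definitions alongside the rate makes the difference visible.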
Anonymous (2012-01-31 09:38):

Doctors are not trained to test their own or each other's work. If someone gets better, they leave the office. If they don't, it is their fault for not taking better care of themselves. Very few physicians or physician leaders use practice data epidemiologically. (Volume, morbidity mix, acuity, yes. But to understand performance?) The move to ACOs will expose this gap dramatically, as practices become accountable for outcomes that they have little experience thinking about or measuring.

It is time for medicine to start doing science, not just using it. Testability, replicability, and best-practice diffusion require accountability. The CMO in Anon 8:11's story didn't want to know how well patients were being cared for, but how the numbers would look in public. We should strongly question the culture that allows a CMO to believe that (blatantly dishonest) manipulation of data is even an option.

Why isn't CMS asking for both raw and managed data? Perhaps demand could be generated for "supplemental" data, closer to the reality of medical practice, to accompany the digested DPH/CMS numbers. In the same way that epidemiologists can finally mull over some of the CMS data (showing how poorly measures are measured), we might finally begin to understand how far we have yet to evolve.

And wouldn't the city or state health department want to know how well blood pressure is being controlled? (If only, cynically, to predict ED usage.)

Paul Levy (2012-01-31 08:56):

Dear anon 8:11,

You ask, "Do you really think doctors are capable of objectively measuring their own performance, even when measures are not tied to reporting or performance incentives?"

I admit that there are some people who will do as you say, whether a metric is publicly reported or not. I do, however, think it is a small minority.

I don't see that such people are more or less likely to do so based on whether the metric is their own or one imposed by the state.

Paul Levy (2012-01-31 08:54):

I'm so glad you commented, Jim, as it allows me to clear up a couple of points.

First, I absolutely agree that almost everyone can use help and can learn from others. Any quality control program worth its salt will attempt to do so, to see what measures have been most effective in other places, and also to share stories of how improvement actually occurred.

It is this paragraph that I find troubling:

"Comparability is needed to the extent it allows us to measure and assure accountability to essential healthcare standards and outcomes; it is required for public reporting. I do believe we must be held accountable publicly for our outcomes. The MA Health Care Quality and Cost Council has set some important and practical guidelines for measures. Then, comparability is also a gift to the degree it allows us to build and share knowledge and learning, set bold targets, and then achieve them."

Being held publicly accountable does not mean we have to use the same definitions for everything. Sure, some things are easily made standard, e.g., "door-to-balloon time" for patients arriving with chest pain. For others, minor differences in a metric's definition are inconsequential, as long as the hospital clearly states the basis for the metric and holds to it over time, so that we can watch the trend.

So I don't believe that comparability is the essence of the gift you describe. Transparency is.

Anonymous (2012-01-31 08:11):

Do you really think doctors are capable of objectively measuring their own performance, even when measures are not tied to reporting or performance incentives? I worked for a community health center that wanted to see how well controlled its hypertensive patients' blood pressure was. The CMO asked me to average the latest BP reading from all patients with a hypertensive diagnosis in the last year (not just new hypertensives). The result was quite high. He then asked me to calculate the average of each patient's average over his or her last three BP readings. The result was lower, but still high. He then asked me to average the lowest of each patient's last three BP readings, and the result was quite close to normal. All the MDs congratulated themselves on how well they were controlling BP in hypertensive patients.

To be clear, this was not part of any public reporting or P4P initiative. It was purely for "quality improvement" purposes at that one CHC. It gives me very little faith in self-reported performance measures.
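[Editor's note: the three summaries the CMO requested are easy to reproduce. The sketch below uses invented readings, not the CHC's data, to show how each successive definition yields a rosier number from the same underlying measurements.]

```python
# Toy illustration of the CHC story: three ways to summarize the same
# systolic BP data, each producing a lower number. All readings invented.
from statistics import mean

# Each patient's last three systolic readings, most recent last.
last_three = [
    [155, 150, 162],
    [144, 149, 153],
    [152, 160, 168],
    [138, 142, 150],
]

# Definition 1: average of each patient's latest reading.
latest_avg = mean(p[-1] for p in last_three)

# Definition 2: average of each patient's own three-reading average.
avg_of_avgs = mean(mean(p) for p in last_three)

# Definition 3: average of each patient's *lowest* of three readings.
best_avg = mean(min(p) for p in last_three)

print(f"avg of latest readings:  {latest_avg:.1f}")
print(f"avg of per-patient avgs: {avg_of_avgs:.1f}")
print(f"avg of per-patient best: {best_avg:.1f}")
```

Same patients, same readings; only the summary statistic changes. That is why the definition, and not just the resulting number, has to be reported.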
Jim Conway (2012-01-31 07:52):

Well, Paul, now look at what you've done. I read your blog posting and said to myself, "What do you think of that?" In the last 24 hours I've received two emails from friends asking, "What do YOU think of that?" So...

At one level, I've seen organizations spend excessive resources "fussing" in search of the perfect measure to explain their practice, benchmarking endlessly in search of the right number that allows them to look good or "hide," and in the process wasting all their time on number crunching rather than on improvement.

It is amazing what you can accomplish using Brent's steps. At the same time, most of us need help; we are not Brent, nor do we have his team or resources. Having the counsel of a respected party offering up valid measures can be a very good thing. It can help all of us draw on the evidence, be balanced in our approach (and not just see the issue through our biased lens), build on the learning of others, and do our annual benchmarking and target setting. I've seen people in MA take Ken Sands's work at BIDMC on measuring harm and bring it to their own organizations; we did it at Winchester Hospital. The fact that we use the same approach allows us to share, challenge, improve, and build.

Comparability is needed to the extent it allows us to measure and assure accountability to essential healthcare standards and outcomes; it is required for public reporting. I do believe we must be held accountable publicly for our outcomes. The MA Health Care Quality and Cost Council has set some important and practical guidelines for measures. Then, comparability is also a gift to the degree it allows us to build and share knowledge and learning, set bold targets, and then achieve them.

Senge told us we need to confront reality. What I have found most helpful about the work of groups such as the NQF on comparability is that "beauty is no longer just in the eye of the beholder"; on this base of evidence and comparability, we can make exceptional progress and improve together.

Thanks, Jim

Anonymous (2012-01-30 21:44):

We can say that we're not so good at measuring most phenomena. We're pathetic, really, when it comes to human behavior. So why bother? Because we would never have had a flu vaccine if we had thrown up our hands when Mendelian genetics proved too simple for most biology.

Give scientists a little money to compete (or consultants who know less but cost six times as much), and in no time there will be a replicable standard for measurement. Then open the doors and give hospitals incentives (or penalties) to compete on transparency. Not on metrics alone, but on observable transparency.

It isn't that safety measures are poor (some are, but that isn't a problem science can't take care of in a competitive heartbeat). It is the demand for medicine to be a science that is.

Paul Levy (2012-01-30 13:17):

Hah! Good observation. You want to set the goal to be both aggressive and achievable.

Larry Ginsburg, MD (2012-01-30 13:07):

A third possibility is for the rubber band to stretch to the point that it breaks.

Paul Levy (2012-01-30 10:07):

Excellent. Thank you.

Anonymous (2012-01-30 10:06):

Hi Paul. Your post reminded me of this very short article about how benchmarking is the fastest route to mediocrity: http://goo.gl/C3ovd