Tuesday, March 20, 2007

Data, data, everywhere


A good and thoughtful follow-up on the Jayco issue this morning by the Globe editorial staff. The final line:

Patients need a reliable source to tell them which institutions do the most to minimize errors and correct those that occur, and have the best outcomes at treating disease.

Who can disagree? But here is the upshot. There are already tons of public reports about hospital performance on a variety of metrics. Two problems: (1) They are based on administrative (i.e., claims) data rather than more accurate clinical data; and (2) they are way out of date. These numbers might as well not be published at all -- in terms of the useful information they provide to referring doctors and to patients. Think about it: are you going to make a decision about treatment based on numbers that are two years old? The numbers remind me of the accurate, but useless, answer given by the apocryphal student in the exam question displayed above.

The writer says that my proposal for self-reporting "is no substitute for comparative data from an unbiased source." It is not a substitute, but it is available in real time, and it is based on exactly the same data each hospital uses to make decisions about clinical improvement.

I have now heard a lot about the lack of comparability across hospitals. For example, I am told that each place measures central line infections differently. A simple solution: Normalize the results. Let's set June 2006 as the base period for all hospitals, with a value of "1". Each month, show whether the chosen metric has risen or fallen relative to "1". That way, the public can see whether things are getting better or worse in that hospital.
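For those who want to see the arithmetic, here is a rough sketch in Python. The monthly rates below are made up purely to show how the index would read; they are not anyone's actual numbers.

    # Normalizing a metric to a June 2006 base period.
    # The rates are invented, just to illustrate the idea.
    monthly_rate = {          # hypothetical infections per 1,000 line-days
        "2006-06": 4.0,       # base period, reported as "1"
        "2006-07": 3.6,
        "2006-08": 3.0,
        "2006-09": 3.2,
    }

    base = monthly_rate["2006-06"]
    for month in sorted(monthly_rate):
        index = monthly_rate[month] / base   # below 1 = better than June 2006
        print(f"{month}: {index:.2f}")

However each hospital chooses to count its infections, the index only asks whether this month is better or worse than that hospital's own June 2006 -- which is the whole point of normalizing.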

You know, what is striking about this "debate" is the lack of criticism I have received on this blog about this kind of proposal. Indeed, most comments have been quite favorable. If you folks out there who think it is stupid, useless, or otherwise bad would just submit comments (even anonymously), all of our readers would have a chance to judge for themselves. Are you that reluctant to engage here on the issue for fear of giving credibility to this medium (i.e., a blog)? Please: I won't be offended by disagreement. We work in academic medical centers, where open discourse is to be encouraged and treasured. (See the marvelous comments below on an even more controversial subject to understand how vibrant the conversation can be.)

All right, call the MSPCA: I am beating this horse past death!

And a note to the interns, mentioned below: Please read through the various postings and comments on this topic. It will be the major issue facing hospitals during your residency and after. Perhaps your arrival in our institutions will help us all come up with better solutions!

15 comments:

Anonymous said...

Not that I buy into this argument, but the one used at our hospital (in Iowa) is that the only reason to share the data is to compare hospitals. As long as the definitions of what constitutes a surgical site infection (for example) remain ambiguous and are measured differently from hospital to hospital, the comparisons will always be false. I think everyone engaged in the debate has good intentions of lowering the infection rates throughout the country and within their own hospital, but it is imprudent to act on good intentions alone. Sure, the comparison will push hospitals to improve - but if your hospital is consistently ranking low in comparison to others, it is easier to simply change how you measure the infections than it is to change the system. What the system really needs if it is to work is a trusted watchdog. I see from reading about the Pennsylvania system that there is an audit process in place. I would like to hear more about that. Does it work? How much does it cost? Bottom line: it is difficult in business to police yourself, and it is even more difficult to believe your competitor across the road is doing the same.

Jaz said...

As someone who thinks it is - in principle - a good idea, let me play devil's advocate.

If we make a Web site for facilities to self-report based on their chosen method of counting, we have no way of ensuring that the chosen method is adhered to over time. You yourself just posted about how your facility is refining the way you're measuring VAP rates.

As soon as the methodology changes, the previous numbers go bye-bye.

The problem this causes is that rates like CLAB and VAP need to be measured over a longer period than a month.

We, the consumers, don't care whether you went up or down a nudge since last month; we want to see an overall improvement since, say, last year.

Unfortunately, without some kind of legislative requirement, I don't see facilities signing up en masse to both normalize the measurement methodology *and* publish their results. One or the other, never both.

I'd be more satisfied to see hospitals joining up and working collaboratively to get these infections down in general; until then, we'll take the data any way we can get it.

With the national conversation on "transparency" getting louder by the week, I'm becoming concerned that we're losing the focus of why we report on quality in the first place. It's not reporting for reporting's sake. It's not so we can punish low scorers. It's not so we can pay higher performers. All these things are done with one goal in mind only:

It's supposed to spur improvement.

If anyone can tell me how we can honestly display any meaningful trend in rates over time given self-reporting of internally-defined measures, I will personally build the necessary database and Web site. Which may sound cocky, but I really don't have the answer. If anyone does have the answer, I have the technology and the time.

Rob said...

Jaz hits the nail on the head: What best benefits patient health, period? Putting a hospital or provider out of business may or may not be a positive outcome.

Figuring out a way to improve everything, by focusing, if you will, on the "product" of helping people? That's worth money.

It's similar to the problem General Motors and its ilk have: The car guys got beat out by the money guys, and it became all about margin and units and cost-cutting and not about, oh, say, making good cars. Yes, the other things are necessary for the business, but it's still about making good cars. Period.

Yes. Metrics are invaluable to manage improvement. But they are not the improvement itself. The process should never be the deliverable.

If the author of this blog can come up with a plan whereby meaningless competition disappears, I'm all ears and nose and throat. Unlikely, though, because people are, in the end, people.

Anonymous said...

You said in your original post, “I see no competitive advantage.” However ideal this sentiment may be, it is probably not widely shared. Hospitals will tweak the data to “look good.” Just as with the US News and World Report rankings, or the number of favorable write-ups in the Boston Globe, hospitals are also interested in marketing. The best data are likely to come from third-party review for this one reason. You rightly point out the limits of administrative data, but your solution is probably not realistic.

On another note, in a litigious society, hospitals that fall in the bottom 20th percentile of any given metric are more likely to be sued.

Let’s say your infection rate is published and shows you have a “high” rate (which is true because you accept a lot of transfers of patients with MRSA from nursing homes), and then one of your patients dies of a central line infection. The first thing plaintiff’s attorneys will do is “check” the website to see how your hospital is doing. Interestingly, plaintiff’s attorneys do provide a perverse influence on physicians and hospitals. Unfortunately, the motivation they provide is not to provide the best caregiving experience to the patient but rather to practice defensive medicine.

Anonymous said...

Let's start with Anon 11:43. If the malpractice argument holds, that exposure already exists. Attorneys can obtain information from existing websites, plus they will conduct discovery and get hospital records and depositions in any event.

On tweaking data, I think you don't understand how hard it would be to do that without someone spilling the beans. The numbers we use for central line infections, for example, are circulated to dozens of people in the hospital. They would be sure to notice and comment, both internally and externally, if they were different from those posted on a website. There are no secrets in this environment!

Anonymous said...

To Jaz and Rob,

OK, let's say methodologies change over time. What faster way to have those changes revealed to the public than on the kind of site I propose? Under the current system, they will show up two years later, if we are lucky.

And Rob,

Please see my other postings about how posting helps in self-improvement.

Anonymous said...

New York Magazine, which rates hospitals and their specialties, often annotates the entries. For instance, Sloan Kettering has more reported cancer deaths because the most dire cases go there.

In something like infections contracted in hospitals, I don't see why competition should enter into it. It should be a basic goal of all hospitals to eliminate them and to share best practices.

Speaking as a former patient, it is the perspicacity and talent of the health care workers that impresses me.

Anonymous said...

To Anon 8:47 and others,

This isn't about policing yourself or sanctions. It is about telling the public in clear terms what we are trying to do to improve, and demonstrating it with whatever numbers we think are appropriate. It is about trusting the PUBLIC to make judgments. They already do make judgments and read gazillions of webpages trying to learn about their diseases and treatments. Why shouldn't the hospitals offer real information that can also be helpful and informative? (And which, by the way, stimulates everyone to get better and better.) But sure, if you want to add an audit function, by all means do that . . .

Also, if it is IMPOSSIBLE to have comparisons, doesn't that also apply to the two-year-old data that is currently posted?

The reductio ad absurdum argument is that nothing is comparable, therefore nothing should be posted. So let's just go back to testimonial ads in the newspaper by Mrs. Smith about how wonderful her heart surgery was.

There are things that are comparable, or close enough. The Institute for Healthcare Improvement has studied this for years and has numerous suggestions for various treatment issues. For example, see my posting on reduction of ventilator-associated pneumonia, which is based on their suggested protocols.

Jaz said...

Well, like I said, I agree in principle. I just wonder how much effort the self-reporters will put into telling the Web site about their changes in methodology as and when they happen.

I don't think anyone here is saying nothing is comparable, just that internally defined quality measures with no consensus in the broader community aren't.

I'm gonna go make it. Let's see what happens.

Anonymous said...

Me, too. And MSK and MGH and the others are GREAT hospitals. But even great places can improve. And we do share advances in the technical aspects of clinical care. What is not always shared in great depth is how systemic improvements can be made in an organization. What is sometimes called "the science of care" is not always at the forefront of medical education or medical practice and research in hospitals.

Anonymous said...

Again, as nothing other than a patient: In recent years, hospitals have established home-care units to extend their care and perhaps help to meet expenses. Do you discuss the possibility of establishing health care consultancies that could export your best practices that are basic to all health care institutions? I hear often about the IT innovations at BIDMC. Do you think it's better for your institution to keep your IT proprietary, or to make it possible for other institutions to employ it, say, by developing software?

Anonymous said...

I think it's OK for hospitals to use different measurement techniques as long as they describe them and try to maintain consistency within their own institution. It would be helpful, however, if a consensus could be developed for measuring patient risk and then risk adjusting the data. It seems to me that an infection, or even a death, that befalls a frail 80- or 90-something whose immune system may already be compromised is quite different from one happening to a 40-year-old who may be in the ICU as a result of an accident, bad asthma attack, etc. Since I'm not a doctor or even a healthcare professional, I have no idea how well developed risk measurement is or what it would take to bring about meaningful improvement. I do think it might be a worthwhile area of focus, however.

Emily DeVoto, Ph.D., said...

OK, I'll jump in. I would say that there are two reasons to report rates: (1) to spur improvement and (2) to allow consumers/purchasers/insurers to compare data. Both are important.

I believe you can produce infection rate data that are comparable between hospitals. Check out the work Missouri is doing with state-mandated public reporting of infections - they are starting with the CDC surveillance methodologies, and working with providers from the ground up so that everyone is counting infections the same way, and with consumers so that the data are useful. In addition, rates are reported on the basis of ICUs, rather than across a whole hospital, and are adjusted to reflect the risk of patients - this of course is key for comparability.

Paul, I think the idea of normalizing all the rates so that you're only looking at changes over time is clever, but it does obscure underlying problems. What if, even though they're improving, hospital A starts out with 10 or 20 times more infections than hospital B? I would want to know, before I decide where to schedule my surgery next month, whether I'm going into a high-infection environment. In other words, the fact of improvement may not be enough for me right now.
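To make that concrete with invented numbers: suppose hospital A starts at 20 infections per 1,000 line-days and improves to 16 (an index of 0.8), while hospital B starts at 2 and drifts up to 2.2 (an index of 1.1). The indices alone make A look like the better choice, even though its absolute rate is still many times B's.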

Anonymous said...

Please tell us that the answer to the test question is a joke . . .

Anonymous said...

Dunno. I am not sure. It is one of those things that gets passed around in cyberspace. We will never know if it was real or not.