Thursday, May 22, 2008

Man bites dog: Levy agrees with MMS

I don't think that anyone who has read this blog over the past several months or heard me give speeches can accuse me of being anything less than a strong advocate for transparency in the health care system. Nor have I hesitated to publicly disagree with the Massachusetts Medical Society when I feel they are being overly protectionist or conservative in their public policy positions. But I admit to sympathy for the MMS' point of view, as set forth in this story by Jeff Krasner in today's Boston Globe.

As noted in the story, the MMS is suing the state's Group Insurance Commission, which runs the health benefit program for state and local government employees, for failing to rank physicians properly when tiering them by cost and quality. My sympathy comes from the fact that doctors in our hospital have been improperly ranked based on faulty information and, when they have called the GIC's agents on this matter, they have been told, in essence, "Too bad. We'll fix it next year."

It seems to me that the GIC's intentions are good, but the implementation is flawed. If there is not a clear and timely procedure for correcting incorrect data about a doctor, the agency is unwittingly providing poor information to the public, undermining its very purpose in publishing the information in the first place. To the extent patients end up paying higher co-pays to see doctors who are improperly ranked, it is unfair. Finally, the persistent publication of erroneous data undermines the efforts of those of us encouraging greater transparency, aiding and abetting the opposition of the more recalcitrant members of the medical profession. (And to be clear, I know Dr. Bruce Auerbach, the current head of the MMS, and he is a strong advocate of quality improvement and is definitely not in that "recalcitrant" category.)

The article gives a couple of examples of the problems encountered by doctors in their rankings. I invite other MDs out there to share their stories on this blog.

17 comments:

Anonymous said...

Paul;

I do find your comments highly ironic. I remember a fairly sharp exchange when I first started reading this blog a year or more ago, where you said something to the effect that "you folks" need to wake up and smell the coffee regarding accountability and performance transparency. It seems, as in all human experience, that a more accurate picture emerges when the experience strikes close to home.

Or, your attitudes have just evolved since then. The picture IS complex, with points on both sides. (I think a major part of the problem is that any bureaucracy is inherently inefficient and therefore prone to inaccuracies.)

And ps, my comments are intended as friendly ribbing, not finger-shaking. (:

nonlocal MD

Anonymous said...

Ribbing well received, nonlocal! Thanks, as always, for your loyal readership and your well-intentioned and thoughtful comments.

To your point, of course bureaucracies have inefficiencies, but they need not give the "take-it-or-leave-it" response given here, especially while also making repeated pronouncements about the importance of accurate consumer information. And here, too, real people will be forced to pay more money to see a doctor they trust just because of a wrong ranking.

Anonymous said...

Mr. Levy,

Interesting post. It sounds like you may have changed your tune a little on the importance of accurate data and rankings in the transparency movement.


http://runningahospital.blogspot.com/2007/12/say-it-aint-so-joe.html

Specifically, would you still stand behind the last paragraph of that older post in light of your recent comments on the MMS and the importance of meaningful data?

Anonymous said...

Paul,

Thank you for taking a second look at this. The issue is not whether transparency and measurement are good or bad. Both are good: they can positively affect physician behavior and improve quality. Done poorly, as you stated, they can have unintended consequences, produce poor results, and be unfair to patients. That's why it's essential to do this right.

Anonymous said...

Dear Anon 11:24,

I'm not sure I understand your point. Here's what I said at that time. I stand by it, and it is a completely different point than the one I raise here:

Where does this leave us? Well, as I noted in a Business Week article, the main value of transparency is not necessarily to enable easier consumer choice or to give a hospital a competitive edge. It is to provide creative tension within hospitals so that they hold themselves accountable. This accountability is what will drive doctors, nurses, and administrators to seek constant improvements in the quality and safety of patient care. So, even if we can't compare hospital to hospital on several types of surgical procedures, we can still commend hospitals that publish their results as a sign that they are serious about self-improvement.

Anonymous said...

Mr. Levy,

I was just wondering, like others commenting here, if your attitudes have changed.

You previously stated that even if the data on surgical outcomes was not statistically meaningful, it should still be published, "as a sign that hospitals are serious about self-improvement." You didn't seem concerned with possible damage done (to surgeons, hospitals, etc.) by releasing faulty data.

Now, you post about rankings (quality measures) based on faulty information and the harm it may cause, a valid concern not voiced in your earlier post.

It seems like your emphasis on data quality has changed, which was the point of my post.

Anonymous said...

Just wanted to direct your attention to a blog called "Repairing the Healthcare System" (http://stanleyfeldmdmace.typepad.com/).

The author is Stanley Feld, M.D., FACP, MACE (http://stanleyfeldmdmace.typepad.com/about.html).

Anonymous said...

I never talked about releasing faulty data, and I don't know why you keep saying that. Just because data does not indicate a statistical difference in performance between institutions or doctors does not mean it is faulty or should not be released. It just means it should not be portrayed as being statistically different.

My attitudes have not changed.

Anonymous said...

Whatever problems exist here should not be that hard to resolve. Insurers want to identify and reward cost-effective care, penalize excessive utilization and steer patients toward the highest quality, most cost-effective providers. Doctors want any ranking system to be transparent and the factors used to be relevant. They also, presumably, would like there to be an appeals process that is timely and fair if they think the data used to determine their rank is wrong or inappropriate.

Claims data is imperfect in that it only shows what was done but does not speak to outcomes. Insurers, I think, are interested not only in the care that a specific doctor provided but in the cost of all care driven by his or her decisions – referrals to other doctors, imaging, labs, drugs, admission to the hospital, etc.

I’m always a bit skeptical of doctors’ complaints given the guild’s decades-long history of trying to stifle competition at every turn and their more general resistance to transparency and accountability. It’s not as though these systems have been designed without their input. Provider decisions drive at least 85% of all healthcare costs even though their fees account for only about 22% of costs. Some doctors drive a lot more utilization than others even within the same specialty and with comparable patient population risk profiles. If we are ever going to reduce regional practice pattern variations and squeeze the waste out of the system, there have to be adverse financial consequences for the high utilizers among doctors and hospitals at some point. If anyone knows of a better way to accomplish that, I’m all ears.

Anonymous said...

Barry;

It's kind of like having your performance as a financial analyst measured and publicly ranked by how the companies did that you picked (or something analogous; you know your job better than I). And then your salary would be determined by those measures. To the extent that your resulting ranking comes about from items either irrelevant or out of your control, you would feel that it is unfair.

That said, I agree with you that some docs are just dead set against any rankings. No ranking is going to be entirely fair, but I do think Mass is being a bit arrogant by refusing any type of appeal or due process.

bev MD

Anonymous said...

Well, anon 4:18 prompted me to go back and find your previous comment, which I remembered and which is somewhat illuminating in the current context (April 9, 2007):

"What is is (sic) about medial (sic) training that makes people think they should not be held accountable by something other than anecdotal evidence? I have yet to see a metric that most doctors think is fair, notwithstanding excellent work by IHI and other places to create statistically valid approaches. You folks need to get past that and understand that society has a right to ask these questions. With the government paying about 40% of health care costs, you should also expect legislators to ask them. Sure, it will not always be fair, but who promised that life would be fair?"

So fair is fair, except when it's unfair. (:

And BTW, while scanning the posts I discovered tomorrow is the anniversary of your mother's death. My condolences; I still miss my dad badly after 5 years.

nonlocal MD

Anonymous said...

Beyond the obvious reply that consistency is the hobgoblin of small minds (!) -- if there is indeed any inconsistency with what I have previously written, and I am still not sure there is, as I am having trouble just now finding the context for the quote you cite -- I just want to point out that the problem I am talking about today with regard to the GIC is its refusal to allow an aggrieved doctor to correct a bureaucratic error in that doctor's data. I am not advocating that there be no ranking system, or that a doctor should be permitted to opt out because he or she feels the system is unfair. I am saying that the failure of the GIC to allow legitimate corrections to data undermines what it is trying to accomplish and imposes unfair costs upon consumers.

Anonymous said...

And . . . thank you for the final thought.

Anonymous said...

Big story in the LA Times--sorry, don't have the URL at hand, but it was yesterday--about how docs are screaming about how one report from a "sinister" patient can ruin them. Of course, that information is anecdotal. I could tell you a dozen bad stories just from my own family. The reporter seemed to think this antagonism was a result of the crumbling system, but I think there are many factors at work. Maybe a "wrongly classified" doc needs to get relief sooner, but there is a larger picture here.

Anonymous said...

Mr. Levy,

I maintain that publishing data on doctor rankings that doesn't reach statistical significance is publishing faulty data. (i.e. data that has serious faults)

If someone works part time and has a practice with only a handful of diabetics (for whatever reason), then data on outcomes for those patients is likely not statistically meaningful and says little about the doctor in relation to his/her peers. As you know, a small sample size may lead to inappropriate conclusions that are due largely to chance.

Similar situation for certain surgical procedures, etc.

(This was pointed out in the "say it ain't so, joe" post, so I don't think I'm saying anything new here.)

Publishing potentially meaningless data (from a statistical standpoint) may lead to harm, misinform patients, and is, by most definitions, unfair.
It may have some internal use (consistent tracking among physicians, etc.), but it should not be published for the public.

Richard Wittrup said...

It's what happens when somebody tries to micromanage something as complex as our health care delivery system.

Eventually, the focus will be on institutions like BIDMC, which will be responsible for the performance of their physicians.

Anonymous said...

Anon 5:42,

The point Joe Newhouse made back then was that there were not enough observations in the specialties listed to draw statistically significant distinctions among the doctors and/or hospitals listed. That does not mean the numbers are not accurate. It just means that they should not be used for comparisons.

That will not apply in all cases. Indeed, the non-public NSQIP data on surgical outcomes is statistically significant. As is the IHI data on overall mortality in hospitals -- also not generally made public.

Here, the GIC is trying to use data to influence people's decisions about which doctors to go to, by varying the co-pay based on some collection of data. I don't know if what they have would be statistically significant if it were accurate, but my point is that it is not accurate because of the GIC's failure to let people correct it in the face of clear errors.

In any event, the point I have made over and over is that transparency's main value is not in creating comparisons among institutions. It is most useful as a way for a given organization to hold itself accountable for constant process and quality improvement.