Saturday, July 14, 2007

Above averages

As a follow-up to yesterday's post, guess which of the following stories is true and which is false:

Red Sox slugger David Ortiz has told manager Terry Francona that he will no longer bat against any pitcher who has an ERA below 3.0. Ortiz, furious that his batting average has been made public, said "It just isn't fair that they include my at-bats against the really hard pitchers. No one is going to think I am good at this game." Ortiz has said that he will sit out games until the starting pitcher is relieved and replaced by someone less difficult to hit against. "I don't care if this causes my team to lose," he was heard to say. "I have a career to think about." Francona has yet to respond publicly.

OK, you already know that's not true! Our local hero thrives on taking on the really good pitchers. So, here's the actual story from SFGate.com.

California health authorities on Thursday released a study showing for the first time how many heart bypass patients die after surgery, the names of their surgeons and the hospitals where the operations were performed....


Dr. Ismael Nuno was furious with [his worse than average] rating. "I've had a very illustrious career, and when my name comes out tomorrow I might just retire," he said in a phone interview. "Nobody in the state is going to write right next to your name that Dr. Nuno tried really hard to keep this patient alive. All it's going to say is Dr. Nuno is a terrible surgeon."

Nuno warned that some surgeons already are turning away patients with poor outcomes for fear they'll get tagged as bad doctors. "People are dying because of what the state of California is doing. Surgeons are walking away and saying, 'Tough, it's either my career or your death.' "

It looks like Dr. Lee and his colleagues have some more empirical support for the conclusions of their article.

OK, I know this is not a fair comparison, and I don't make it to disparage this doctor, who, by all accounts, is a very fine surgeon. Beyond having a little fun with the topic, I make it to frame the question:

"Why are many doctors so sensitive and/or resistant on these matters while people in other fields have come to accept public reporting of their results?"

I look forward to your answers.

43 comments:

  1. An interesting parallel to consider is education, the field I work in. There's a growing movement to measure student and teacher performance, and to report results. Teachers rightly point out that student achievement is influenced by many factors beyond their control: family involvement, income, educational level of parents, etc., and that it's unfair to assess their ability as educators entirely on the performance of their students. In the same way, physicians are concerned that we don't yet have adequate systems to adjust for patient severity. This is the same concern of teachers who work in schools in poorer and less privileged communities. I am sympathetic to the concerns of these teachers, as I am to those of Drs. Lee, Torchiana, Lock and Nuno. But in the same way that I and other parents want to know how our kids' schools and teachers are doing, we also want to have as much information as possible on the performance of the people who provide medical care to us and our families. Lots of complicated factors influence the outcomes of work for many of us, but that's no excuse for not measuring how we're doing, and letting the people we serve know the results. Until we start doing this, we're not going to advance the art of data collection, measurement or improvement as much as we must.

    ReplyDelete
  2. Hi Paul,

    I'm not familiar with public reporting in other fields, but I wonder how much of the objection is a result of the personal nature of the reporting. In other fields, is the reporting more on an institutional or group level? If so, the criticism and the subsequent burden is distributed among many individuals, while this sort of reporting for physicians affects a single individual.

    Individual physicians certainly have more responsibility than baseball players, and they also don't have the time to tailor and micromanage their public reputations. A baseball player might achieve more glory in the press and respect from their fans by batting against hard pitchers, but what incentives are there for physicians to take high risk patients? Individual public reporting is a strong disincentive for physicians to take any risk in their practices.

    Furthermore, I'd like to think that medicine generally attracts people who excel in being both good in character and intelligent. Many other fields attract people with either of these qualities, but not necessarily both at the same time to the same degree (though there are always special individuals in every field who break the mold). This profession is supposed to reward people for being both of these, and our moral compulsion, ethical conduct, and compassion call upon physicians to treat all patients to the best of their abilities - no matter the risk (reputation and financial stability should be the last concerns on their minds). For a long time, the judgment and performance of physicians have gone unquestioned, and they have been very well compensated. However, it seems that as the medical field becomes more business-oriented and litigious, the part of medicine that should encourage physicians to be good, compassionate people is tossed by the wayside. As we strive for reforms aiming at improvements in medicine and health care, it's quite possible that we can't have everything: we can't expect physicians to give up their autonomy, respect, and great compensation and still have the aggressive drive for excellence that physicians in America have and that physicians in many other countries do not. We also cannot expect future generations of physicians to adapt to all of the new burdens placed on them without the loss of some of the qualities that make today's physicians respectable and capable members of their communities. While some may view physicians as being overpaid, overpowered, and arrogant (sometimes with a degree of jealousy), do we have much to gain from clipping the wings and putting blinders on the eyes of our talented, our good-willed, and our leaders? We want our doctors to focus 100% on treating our ills, not on our potential impacts on their reputations and their pocketbooks.

    At the same time, though, I think it is appropriate for the public to call upon the medical profession to police itself better and hold its members to its high standards: external policing and performance measurement have arisen because physicians have been unable to constructively criticize one another and maintain an acceptable level of homogeneity. However, this external oversight has come at great costs to the profession and its beneficiaries. Instead of pressing forth in that direction, perhaps physicians should step up to the plate and take it from here?

    ReplyDelete
  3. It's for this very reason that I send all of my complicated patients down the road to the university. There is such demand for your services from your other, less complicated patients that the extra time spent and the possible repercussions from policies just like this give no incentive to take them on.

    ReplyDelete
  4. Paul -- I generally love your take on things, and agree with your complaints about how doctors can work to stifle competition.

    But I think you are getting it wrong on quality measures. The actual measures being proposed and used are severely flawed. When someone pointed this out in another thread about hospital quality measures, you told them to take it up with JCAHO and the like.

    You can't reasonably take that position, and yet expect doctors to be happy about being graded on such measures.

    Bad measures of quality can mean the results are random or worse. They can create perverse incentives, like refusing to perform surgery on sick patients, or refusing to be the primary care provider for a diabetic who won't take his medications as prescribed.

    Lots of doctors would object to being graded on even good quality measures, since no one likes to be graded. But that's not the situation we are in -- the actual quality measures being used often score doctors as having done something wrong even when they are providing optimal care.

    I think if you really want to see quality measures used to improve medical quality, you (Paul Levy) need to get involved personally to improve them. This is the sort of thing that requires an understanding of incentives, business, and costs. Don't expect reasonable doctors to accept unreasonable quality measures.

    ReplyDelete
  5. Paul: The obvious problem is contained in your parody itself. David Ortiz does not have the choice as to whether he bats against difficult pitchers. Surgeons have the choice as to whether they accept complicated patients and (more insidiously) as to whether they operate. Nobody cares if physicians' feelings are hurt by quality rankings. What we CARE about is whether they are creating a perverse incentive not to operate on patients who are "below the mean". Do you really think the guy with end-stage lung disease is getting a CABG when his likely death is going to lead to public embarrassment? Like most insurer-backed ideas, this is a cleverly phrased way of extracting more money at the expense of the sickest people - in this case by making surgeons not want to operate on them.

    ReplyDelete
  6. In what other field is the performance of individual professionals widely reported publicly? I am genuinely curious about this--as a physician (a hospitalist) myself, I am not intrinsically opposed to public reporting of individual performance measures, but I am not aware of this being standard practice in other industries. I can't speak for everyone, but perhaps some of the resistance arises from the natural reluctance to be a guinea pig.

    ReplyDelete
  7. The philosophical issues are important here as noted above.

    I would like to raise an epidemiologic issue. And it's not very original.

    But I don't think the comparison to baseball batting averages is apt.

    Over the season, most position players on a given team, or in a given league, for that matter, face the same population of pitchers.

    Doctors, however, have their own unique populations of patients. Some doctors see patients who are sicker, have more complex problems, have more illnesses, than do other doctors. If their "batting averages" are not corrected for these differences, those who take care of sicker, more complex, or more chronically ill patients are likely to have worse results, regardless of their skills.

    If they are judged by performance measures that do not account for sickness, complexity, and co-morbidity, the results may be perverse, penalizing the doctors who work the hardest to take care of the patients with the poorest prognoses.

    And it turns out fairly accounting for sickness, complexity, and co-morbidity is methodologically difficult, so most performance measures don't really do it well.

    That's why a lot of us doctors are worried about these measures.

    The business folks who often push these measures don't seem to understand how complicated the medical context is, and how hard it is to measure differences in patients that could affect performance measures. So their blithe assurances that the measures will work aren't very convincing.

    And when they start labeling doctors whiners and complainers, it's even less convincing.

    PS - The comparison would be more apt if baseball allowed unlimited, repeated substitution of pitchers, so that a team could repeatedly use its best pitchers only to pitch to the opposing team's best batters. So in that hypothetical game, David Ortiz might only face the aces on the pitching staff. Then it would be right for him to complain that simply comparing his batting average to other hitters, who might face the lowliest members of the opposing teams' pitching staffs, would be unfair, unless the averages were somehow corrected for the skill of pitchers.

    I guess we can be thankful that's not how baseball is played.

    ReplyDelete
  8. Fair enough... I guess we will have to have Mr. Ortiz hit in all nine slots. He may have to catch Wakefield too... Mirabelli is hitting below .200.

    Does Ortiz hit well at 3 a.m.? In the rain, against a young and healthy Randy Johnson...

    ReplyDelete
  9. OK folks, of course the baseball example I gave was not apt, but I did it to make a point and, as noted, to stimulate some conversation on this issue.

    But really, most of you are not thinking like patients. What information would you or your primary care doctor like to have to be able to choose among doctors for a particular kind of treatment? Do you want to base the decision on anecdotes, or do you want to have more quantifiable criteria?

    If the latter, do you really think that all the very intelligent doctors in America cannot come to an agreement on reasonable metrics to measure outcomes in, say, cardiac surgery? Especially in specialties like that, the techniques for doing risk adjustment are pretty well established. To David's point: It is NOT my job to do this. It is the physicians' job. If they do not, they will have metrics imposed upon them by state legislatures.

    SR asked, in what other field are an individual's outcomes publicized? I can think of a few. The most obvious are the investment managers of mutual funds and other investment vehicles. Also, the success rates of both plaintiff and defense lawyers are widely known by attorneys.

    Yes, I know they are not dealing with life and death. Wait, a defense attorney might actually be dealing with life and death, or at least time in prison. An investment person is dealing with the retirement funds of a great number of people, not quite life and death, but certainly related to quality of life.

    But whether there are other apt examples is not quite the point. Medicine is an intensely personal craft. The individual doctor can make the difference.

    By the way, if you don't want to measure physician performance, do you still want to measure hospital performance? After all, hospitals could decide not to take the high risk cases, too, just to keep their numbers up.

    Look, everyone who calls me says, "Please get me the best doctor." I am guessing every one of you would ask for the same thing for yourself or a loved one. If metrics are not developed and put in use, you will never know if you are seeing the best possible person.

    ReplyDelete
  10. I enjoy your blog. I am on an administrative team. Below is a response to me from my sister, a resident.

    "Wow - where do I begin? The reason this guy doesn't understand any of this is because he is not a doctor. He does not go to the hospital everyday and see patients. He has never gotten out of his car in the parking garage, been happy about his patient who had surgery who was doing so well last night and should be discharged today only to get inside and find out that despite using heparin and the patient getting up and out of bed, he had a huge blood clot that went to his lungs and had to be intubated and taken to ICU overnight. So, the guy feels horrible. He can't sleep for 2 weeks because he's wondering what he could have done to prevent what had happened. And because he doesn't feel bad enough already, now his records won't show that the surgery went well, or that he followed all the protocols laid out by various medical committees in the hospital, or that the patient was doing well and about to be discharged. It will show that he had a complication, one that he probably could have done nothing about, just a complication. So you and I are looking for a surgeon. We look at his record. We have no idea what happened but all we see is "complication," so we don't go to this surgeon. So not only is this doc being punished for something he had no control over, you and I are suffering because we just passed up going to one of the best surgeons in town because we have no idea about what really happened.

    There is no "priesthood" to our profession. We all went to medical school and we were all taught that medicine is not just a science, it is an art. Doctors know that you can not objectively judge art. What is a complication? The patient dies during surgery. What if he was a trauma and was basically dead already? The patient needs to have his surgical drains in 2 days longer than expected. What if he is a fat ass and his surgery took a long time and was more extensive than normally required for his type of procedure? The patient went home and her wounds didn't heal and reopened. What if she is a noncompliant diabetic who didn't feel like it was important to change her dressings when she was discharged from the hospital? Hospital administrators need to quantify things and are happy to count the number of "complications" per doctor. Are they looking at the "what if..." about each case? Who are they actually protecting when they make their lists of good doc, bad doc?

    I am going to go take my test this morning. Another test in my quest to become an independently thinking, autonomous physician whose abilities to practice good medicine are being slowly more and more restricted as middle management thinks of new ways to police me for trying to do the best job I can with what I have to work with. Have a good day."

    ReplyDelete
  11. There are far more people who know how to interpret Big Papi's (or any major league baseball player's) batting average than who know how to interpret surgical outcome data.

    ReplyDelete
  12. Ah, so many putdowns in such a short comment! "Have a good day," indeed.

    You are a scientist. You will soon be an attending physician. How would you like your performance to be judged by your peers and by referring physicians? By the way, it will certainly be judged, so you should think about that. If you'd like it to be based on anecdotal evidence, fine.

    My feeling -- and it is just "this guy's" opinion -- is that Americans want to have some systematic basis for judging physician performance. Or some basis for the decision by a referring doctor to send them to Doctor Smith rather than Doctor Jones.

    This is not a question of administrators wanting to police doctors. I will humbly suggest to you that you not use that as an argument. It sounds defensive and doesn't help your case.

    By the way, I do go to the hospital every day and see patients. I don't have a license to care for them personally, but I am in charge of the license under which all of them are seen. When things go well, I always give credit to the licensed professionals. When things go poorly, I am often the one who apologizes for our mistakes or our failures. But I never, ever blame the doctors -- even when they make mistakes. We have a very strong "no blame" policy here because we don't want blame to stand in the way of improvement.

    Best of luck in your career.

    Sincerely,
    T. Guy

    ReplyDelete
  13. I am not a medical doctor, but I can see how some doctors would not want their 'ratings' made public. Unlike baseball, this isn't a game. Each person is a unique PATIENT. Therefore, each patient's prognosis is a result of many factors, not only the doctor's record, which in this case seems comparable to a "winning streak." I am becoming wary of the ranking/ratings that are supposed to be indicative of one's ability to treat people. What about bedside manner? How is that taken into account? Thanks for a thought-provoking post. And, everyone loves a Red Sox analogy...

    A.F.

    ReplyDelete
  14. It's very interesting to see how many different views appear on this topic. In Paul's defense, I do agree that an objective measure of a physician's skill would be a great tool for the consumer (i.e., patients), but as pointed out, these objective measures would be very complicated to come up with and would have to include many different aspects of patient care, as mentioned in the above post, even post-hospital care, which would be difficult for a patient to put together into a decision. But, as Paul pointed out, a physician may be able to use the data to come to some kind of conclusion. As a consumer, when my parents or grandmother need to see a doctor, I want them to see the BEST doctor, specialist, etc. that I can find for them. As a physician (internal medicine), I know that the "best" physicians are those with reputations that come by word of mouth, or people I have worked with. So in reality, I'm not quite sure of a physician's actual talent, just whatever I've heard through the grapevine.

    I do agree that physicians are regulated a lot from the outside because we don't take the time to try to regulate ourselves. It is hard with the hours we put in every day, and even outside of work we still continue to think about the well-being of our patients. It seems impossible to be involved in patient care, business management, AND legislation all at the same time.

    But, what about this as food for thought? Surgical patients are always sent to pre-op evaluations done by the medicine docs. Of course, these are usually for elective cases, but just to use an example. The pre-op evaluation consists of a complete history, physical, and diagnostic testing such as labs, EKGs, echos, radiology, sometimes angiography, etc. A patient is then deemed low, moderate, or high risk for surgery. So by doing all this, we can objectively measure how well a person has their diabetes under control, their blood pressure, how their coronary arteries are, their lung volumes, their cardiac pump status, etc. By putting values to all these measurements, patients can then be "scored" at the appropriate risk level. This is basically what is already done, but no "score" system is used. (This is all hypothetical, and a huge study would have to be done to see if this would work.) Now, there is some estimate of a patient's risk prior to surgery as to what kind of candidate they are. Taking that into consideration, and also considering what risk category of patients a surgeon takes to the operating table, we would have some measure to see if a surgeon takes many high-risk patients and could then adjust their results based on that.

    This of course is something that can only be done pre-op, usually for elective cases. A patient who goes to the OR for an emergent case doesn't have the luxury of being evaluated to this extent. Also, as pointed out in the above post, even after all this, and after all "protocols" are followed pre-, intra-, and post-operatively, things can happen that could not have been prevented completely (blood clots, pneumonia, infection, etc.). Although we take huge measures to prevent these things, they still happen, and it's not anyone's "fault"; it happens. We try our best, we treat, but complications may happen. Do we punish the surgeon for that? Give them a poor reputation as having too many "complications" when, as mentioned above, the patient went home, didn't change her abdominal wound dressings as she should have, and now has a huge abscess in her belly? It's extremely difficult to assess for all these measures, but with proper documentation and some objective measures, this information could be passed on to the PCPs to analyze and come up with a conclusion for referrals. Many patients would not be able to put all that information together properly and come up with a decision.
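    The risk-tier idea above can be sketched in a few lines. This is purely illustrative: the measurements, thresholds, and point weights below are invented for the example, and as the comment itself notes, a real scoring system would have to be derived from and validated against outcome data.

```python
# Hypothetical pre-op risk-tier scoring sketch. The thresholds and
# weights are invented for illustration only; a real system would be
# derived from and validated against large outcome datasets.

def risk_tier(hba1c, ejection_fraction, creatinine):
    """Assign a crude pre-op risk tier from three sample measurements."""
    points = 0
    if hba1c > 7.0:              # poorly controlled diabetes
        points += 1
    if ejection_fraction < 40:   # weakened cardiac pump function
        points += 1
    if creatinine > 1.5:         # impaired kidney function
        points += 1
    return ("low", "moderate", "high", "high")[points]

print(risk_tier(6.2, 55, 1.0))  # low
print(risk_tier(8.5, 35, 2.0))  # high
```

    A surgeon's outcomes could then be reported within each tier rather than pooled, so that taking mostly high-tier patients does not by itself drag down the numbers.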

    ReplyDelete
  15. Paul: I would further ask how you address the fact that the "risk modifiers" in this study ONLY CONSIDERED CARDIOVASCULAR RISKS even though 30-day mortality includes all deaths. Naturally, patients with severe comorbidities were referred to UCSF. Oh, what's that? You're a severe diabetic in California and you want a CABG? Yeah, sorry, my stats aren't looking as good as I'd like this year. Try more nitro.

    The inevitable reply is "well THIS study is flawed, but the next one...". There has never been a system that can truly control for patient complexity and it is incredibly irritating to see the people who don't have to deal with the fallout from it waving their hands at the huge problems with it and breezily asserting that they'll get fixed eventually. Somehow. Here's an idea: come up with a system that works and doesn't cause perverse incentives first and THEN use it.

    ReplyDelete
  16. Well, I am the one who objected to Paul's last post as being pejorative. This one made me think he still doesn't get it, but I thought I'd hold my fire till I saw what my colleagues think, just in case I was overreacting or being "defensive."

    So Paul, maybe we agree on the general issues but not on the tone of your posts. I don't see the "fun" or even the relevance of comparing a baseball player to a doctor; it just seems flippant to me, as pejorative as your previous post. Previous commenters have made it clear that physicians take their patients' lives and deaths very, very personally; nothing to joke about.

    Here's my idea: all individual hospital CEOs should be publicly ranked by name, by overall mortality rate in their hospitals. After all, they ARE in charge of the "license under which all of them (patients) are seen", and are responsible for all of the non-physician services which are often the true determinants of whether the patient lives or dies. Not all patients have surgery, and indeed, one may legitimately argue that the technical surgical procedure is but a minor component of 30-day operative mortality.
    This is a serious proposal, and I hope it is made part and parcel of publicly reported quality assurance statistics. It will no doubt incentivize performance improvement. I am sure Paul Levy will achieve a high ranking, because he's a smart guy - but needs to walk a mile in our shoes.

    ReplyDelete
  17. Paul,

    I wonder what input you would get if you ask the NURSES, who work with these surgeons all the time, which docs are the best. Which surgeons would they choose if they needed a procedure themselves or needed to recommend one to a family member? What criteria did they use to arrive at their conclusion? How does their collective opinion compare to what the doctors think when they talk among themselves?

    ReplyDelete
  18. Dear anon 8:35,

    I guess you are a new reader or else you would know that I have proposed exactly what you suggest. I have proposed that a variety of real, clinical metrics about hospitals be posted publicly for all to see. And I don't mean the process metrics that you see on various websites, which, by the way, are based on administrative data that is 2 or 3 years old.

    Among the metrics I have proposed is the hospital mortality index created by the Institute for Healthcare Improvement - a risk-adjusted metric that is based on nationwide data. Plus real-time data for central line infections, for performance to reduce ventilator associated pneumonia, and the like.

    Beyond proposing, I have been acting by posting our own numbers on several key indicators of hospital safety and quality.

    Here's just one example of what I have put on this blog: http://runningahospital.blogspot.com/2007/05/central-line-infection-report.html.

    Here's another example, where our performance was poor: http://runningahospital.blogspot.com/2007/04/i-want-to-be-proud-but-i-am-not.html.

    We have since created a website to offer these and other data to the public: http://runningahospital.blogspot.com/2007/06/online-with-real-clinical-results.html

    And what is the response from my colleagues in other Boston hospitals, when asked to join in? Not very positive, for sure. Same arguments I have been reading here. "The metrics aren't comparable." "They are not risk adjusted." Even when they are comparable. Even when they are risk adjusted.

    I AM in your shoes. I have put our hospital's performance on the line. I wouldn't ask you to do it if I were not prepared to do it myself. (And in terms of financial incentive, a portion of my salary is based on improvement of hospital quality and safety metrics.)

    As for use of humor on this blog, sorry, but that is the way I often like to present things. You take it as flippant and not being serious enough. I take it as having some fun and realizing that even serious topics can benefit from a little teasing (ab)out.

    ReplyDelete
  19. Thanks to BostonMD for his/her suggestion.

    And Barry, you can bet the nurses have opinions as well!

    ReplyDelete
  20. Paul:

    No, I am not a new reader and I knew that would be your response. I've read all those posts.
    But no - I am not speaking of having BIDMC in some ranking vs. MGH or BW, like the U.S. News and World Report or whatever. I am speaking of having Paul Levy's name out there in bold, with his hospital's name in small print, as having the best, or bottom third, or whatever, number of patients die in his hospital. So your neighbors and people on the street and your kids and everyone sees YOUR NAME.
    You and I have previously agreed that process metrics are a poor substitute. And I couldn't care a bit about your salary being related to any performance metrics.
    This is about personal accountability. How many patients did Paul Levy save, or kill, last year by virtue of his competence - published on some state website or nationally. This is what you are asking doctors to do.

    ReplyDelete
  21. Just one additional thought. If I projected myself into the surgeon's shoes, I can appreciate some of the issues they raised regarding complexity surrounding the data. However, I think they should be able to develop reasonable metrics regarding, say, cardiac surgery. Patients could be divided into risk tiers as I mentioned previously. For the adverse outcomes (both complications and deaths), perhaps the surgeon could be allowed to file a supplementary report that would flesh out the circumstances of the case and be part of the record. For example, patient was an 80 year old male diabetic with severe coronary artery disease and below average (for his age) lung and kidney function. He was in the highest risk tier and his pre-op prognosis was poor even relative to others in that group. He didn't make it despite the team's best efforts.

    Perhaps, from a healthcare system cost standpoint, the surgery should not have even been attempted, but that is a whole different subject and discussion.

    ReplyDelete
  22. Fine with me, anon. Check the BIDMC website with the numbers. That's my name, my face, my words in the introductory video.

    And, as you know, my name is already on all those blog postings in which I posted how we were doing on various metrics -- including the IHI mortality index. See http://runningahospital.blogspot.com/2006/12/first-kill-as-few-patients-as-possible.html.

    The minute there is a state website or any other of the type we have been discussing, or even on the current websites, I am pleased to have my name on it if the sponsors would like that. (Of course, I'd like it better if my colleagues did the same. You should check with them.)

    I guess I come from a different kind of background, maybe because I served in the state government. When I ran the local water and sewer authority, it was assumed that I would be held accountable for drinking water quality, wastewater permit compliance, management of large construction projects, water rate increases, and siting of unpleasant facilities in people's neighborhoods. That's drinking water quality for 2.5 million people -- as big a public health responsibility as I can imagine.

    I'm not saying that to brag. I am saying that I bring that same view to this field -- because hospitals are public institutions with a special charter, and the public has a right to know who is ultimately responsible.

    Think of the irony, though, if a CEO is held accountable for the performance of a hospital, but the doctors within it (who are not hospital employees and who are essentially self-governing) are not similarly held accountable for their performance.

    Thanks to Barry for another good thought. As BostonMD was saying, patients are actually already graded by degree of difficulty for surgical cases, so your suggestion could be easily carried out. The American College of Surgeons already does that in their non-public data system. It produces a risk-adjusted summation of how a hospital does on cases (e.g., vascular surgery) compared to what would be expected for the risk profile of the patients who have been treated. It is a thoughtful and powerful analysis.

    ReplyDelete
  23. Paul: I'm afraid you're being rather glib if you can't see the difference between being publicly responsible for drinking water quality and having a list of people you killed published next to your name. That's essentially what it's saying - only 4 people should have died, 8 did, QED. I can think of few more powerful motivators than this... but they're not going to be the kind of motivators people want. When I finish residency, if this sort of scheme is around, then quite frankly I'm not taking patients who have a good chance of dying in 30 days. It's certainly better than accepting them and then having arbitrary benchmarks influence my care decisions. Let them subject some other sucker to getting their name written up in major newspapers as World's Worst Physician. I bet it won't take too many of those SFGate articles before nobody will operate on them at all, and operative mortality rates will plummet! Wonderful!

    ReplyDelete
  24. Paul--
    I think you fail to give doctors enough credit. I think most physicians realize that public reporting is inevitable, and would simply prefer it to be done right. Despite the messianic zeal with which you (and the IHI) advocate for public reporting of individual physician performance measures, there are real problems with the measures currently being reported. The article you linked to provides a perfect example: Dr. Hoopes performs about 26 CABGs a year, so one death in an extraordinarily high-risk patient over a two-year period causes his rating to go from average to poor. Risk adjustment notwithstanding, small sample sizes will always be subject to undue influence by a few cases. That is why, for now at least, reporting outcomes at the hospital level makes more sense than at the individual level. The NSQIP model is explicitly for inter-hospital comparisons, not for comparing individual surgeons.
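    The small-sample problem is easy to make concrete. Here is a quick illustrative sketch (the 26-cases-per-year volume comes from the article above; the death counts are hypothetical) showing how wide the statistical uncertainty around a mortality rate is at these volumes, using a standard Wilson score interval:

```python
import math

def wilson_interval(k, n, z=1.96):
    """95% Wilson score confidence interval for k events in n trials."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# ~52 CABGs over two years (26 per year, as in the article cited above)
for deaths in (1, 2):
    lo, hi = wilson_interval(deaths, 52)
    print(f"{deaths} death(s) in 52 cases: rate {deaths/52:.1%}, "
          f"95% CI {lo:.1%} to {hi:.1%}")
```

    One death in 52 cases gives an observed rate of about 2%, but the plausible range runs from well under 1% to over 10% -- far too wide to distinguish an average surgeon from a poor one on that volume alone.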

    And surgery (particularly cardiac surgery) is the area where risk-adjustment methods are the best! For many other fields--including my own, hospital medicine--there simply are no validated performance measures that can be reliably applied to an individual physician. I work at an academic hospital similar to BIDMC, so my group is judged on pneumonia quality of care measures. We do reasonably well on things like using the correct antibiotics and administering them in a timely fashion, but our rates of pneumonia vaccination are low. Now, pneumovax does not save lives; it does not reduce the chance of hospitalization due to pneumonia; it only marginally reduces the chance of the patient getting pneumonia again. But our numbers are low, so in order to fix this, over the last two years we've probably spent more time and effort on pneumovax rates than any other quality effort. There are definitely other quality problems at my hospital that are going unaddressed because of the focus on pneumovax. Until better measures are developed, we might be better off deciding for ourselves what needs to be fixed.

    Finally, you said "But whether there are other apt examples [of individual performance reporting] is not quite the point. Medicine is an intensely personal craft. The individual doctor can make the difference." This is true, but only to a point. Patient safety and quality is largely a function of teamwork and the system as a whole, less so the individual physician. Understand, I fully believe that as a physician I am responsible for my patients' outcomes. But the physician is not the only one responsible. Look at the New York CABG data--hospitals with persistently elevated CABG mortality rates often had systematic problems with the care they provided, not incompetent surgeons. You seem to dismiss hospital-wide reporting because it "doesn't help me choose a surgeon." You, of all people, should understand that the quality of the surgeon and the quality of the hospital are closely linked.

    Let me say (again) that I am not at all opposed to public reporting of performance. However, you should realize that this is a very complex issue with good arguments on both sides, especially regarding individual vs hospital-wide measures. Of course all of us--patients and physicians--want measures that explicitly tell us which physicians do a better job. But those measures don't exist yet. Physicians who object to public reporting often have legitimate grievances. Stereotyping them as mere obstructionists doesn't help your cause.

    ReplyDelete
  25. Perhaps a patient's perspective from someone who recently had surgery at BIDMC...

    This blog made me wonder just how I happened to have one of the best doctors in my specialty operating on me.

    When my primary care physician referred me to a specialist, I said "ok" without question. When this specialist referred me to another specialist, I still said "ok" with no questions asked about who they were referring me to. I of course wanted the best doctor, but I assumed that I would always be referred to the best doctors. I think it's generally the case that patients trust their doctors, right?

    Which leads me to my point...I am incredibly lucky that the first specialist I saw "knew" who was best for me. But what if I didn't live in this medically talented/connected Harvard/Boston area? Would my doctor know who's actually the best? I think that when patients blindly trust their doctors, it's only right (and expected) to give them evidence-based information (and I agree with Paul that it's possible to come up with a fair methodology, if not in the numbers). I can't believe that I was relying so much on who my doctor happens to be acquainted with...

    That said, I feel bad for the doctors who would end up last in the rankings, no matter how well they performed. Is there any way for this transparency thing to not be seen punitively? What if the names of just the best-performing doctors appeared publicly, like the top 25%? That way, consumers and referring doctors would know who's up there, while those who didn't make the cut would privately know where they stand but aren't exactly blacklisted and publicly humiliated...

    ReplyDelete
  26. Anon 11:51. I'm not being glib. I was giving a bit of personal background. Please read the context again. I was asked by another commenter to put my name next to hospital measures of safety and quality, and I was explaining why my background made me comfortable with that.

    As to your point "When I finish residency, if this sort of scheme is around then quite frankly I'm not taking patients who have a good chance of dying in 30 days," you are now another data point in support of the conclusions written by Dr. Lee and his colleagues.

    Sr, thank you. I haven't dismissed hospital-wide reporting. See above. I have strongly advocated it. I think it should be supplemented with physician-specific reporting. I agree with you regarding hospital medicine, by the way, and also about several of the currently required reporting metrics. I think there are very good measurements of surgical results, though, as I have mentioned.

    Anon 1:12, thank you. People don't have to be "last", but they could be grouped by statistically valid quartiles -- if the data show a strong enough statistical variation to create quartiles.

    ReplyDelete
  27. The problem that I see here is that some docs don't see themselves as "regular" people. They are above the rest of us. We are all guilty of allowing this or becoming a part of it. Now they are beginning to be held accountable, just like the rest of us, and it is not welcomed. The facade is crumbling. I would also lose the Dr title and merely add the MD at the end of your name like the rest of us. It's just the job that you chose to do and the public trusts you to do your best but we now want proof.

    ReplyDelete
  28. I believe that public reporting should be designed in a way that mirrors the standards of the scientific literature. In other words, it should be readily apparent when the difference between two numbers is statistically significant or not.

    I am a physician and I agree that some form of public reporting is inevitable, and I also feel that it is important for patients to have access to objective information. I thus think that clinicians, rather than simply objecting to the trend toward public reporting, should be advocating for sensible presentations. Presentations that emphasize whether there is a meaningful difference in performance, rather than simply presenting numbers side by side, make the most sense to me. Given the issues of risk adjustment, presenting ONLY raw numbers can be worse than no information, as incorrect conclusions could be drawn. Dr. Lee points this out in his article: when dealing with small volumes, a few bad outcomes can quickly skew the calculation of a rate, yet because of the small numbers the difference would not be statistically significant. We would never allow such a result to be held up as a true difference in the scientific literature.

    On the other hand, if we wait for perfect risk adjustment, that day will never arrive. Similar to the scientific literature, a conclusion might be drawn based on a statistical difference, but the discussion will note that risk adjustment is not perfect and that bias could theoretically have influenced the findings.

    But if public reporting venues emphasize whether performance is "as expected" or an outlier based on statistical methods, as opposed to absolute numbers, the potential for misinterpretation is reduced.
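    To sketch what an "as expected" vs. outlier determination might look like, here is a small example with entirely hypothetical numbers: an exact binomial tail probability for a surgeon with 4 deaths in 50 cases, against an expected rate of 3%. Even though the observed rate (8%) is more than double the expected rate, the excess is not significant at the conventional 0.05 level:

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more
    events in n trials if the true underlying event rate is p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical surgeon: 4 deaths in 50 cases (8% observed) vs. a 3% expected rate
p_value = binom_tail(4, 50, 0.03)
print(f"P(4 or more deaths | 3% expected rate) = {p_value:.3f}")
```

    Because that tail probability comes out above 0.05, a report built on significance testing would label this surgeon "as expected" rather than an outlier -- exactly the distinction that raw side-by-side numbers obscure.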

    ReplyDelete
  29. Paul;

    What's lost in all your "messianic zeal" (I love that one), is that the SF Gate article itself raises questions about the validity of the data - the results re UCSF contrast with both the "high regard" ranking of long duration and the U.S. News ranking, however valid the latter may be. At least one of the surgeons got the strongest possible affirmation from another prominent surgeon in the area ("I'd let him operate on me.") The article says 2 of the patients died of unrelated causes within the 30 days, yet were counted in the operative mortality.

    Doesn't this give you cause to wonder yourself if the state metrics are accurate? So it's OK to ruin someone's reputation in public on the basis of inaccurate statements? In other venues that would be called libel. Oh wait - I'm not a lawyer, I'm a doctor. Maybe malice is required. I am beginning to wonder about malice on this blog.

    ReplyDelete
  30. To set the record straight, I am responsible for comments 8:35, 9:56 on 7/16, and (unposted as yet) about 9 am July 17. So Paul, you're not just arguing repeatedly with the same demented commenter; there are a bunch of us anons. I am also retired, so none of this affects me directly. I am still in favor of individual rankings provided to the hospital medical executive committees for privileging and to PCPs, but Paul, your comments on this post have changed my position from one of being willing to explore public reporting, if accuracy can be attained, to one of: no way. If you don't see it yet, then neither will the public. Let the PCPs decide, which, by the way, is how it happens now anyway!

    ReplyDelete
  31. Glad to have been so persuasive! :)

    So, we all seem to agree that (1) PCPs should have access to risk-adjusted clinical results for specialists to whom they might refer.

    (2) Hmm, maybe it's just (1).

    ----

    But one final comment to anon 9:17. This is not about me or my ability to make sense of the issue or even be persuasive. It is about a greater societal trend that doctors can either get in front of and help frame in a positive way or get steamrolled by legislated solutions.

    And, anon 8:21, if you want to impute malice on this blog, feel free, but in so doing you are falling into a bad trap of denial of the point raised just above. The doctors in my hospital and in this community know that I am not malicious in any sense towards them and am incredibly supportive of what they do. Some agree with me on this issue, and some don't, but we don't accuse one another of malice when we have different opinions.

    ReplyDelete
  32. Holy cow, what a mass of thought and commentary. I've been friends with (and patients of) many caregivers and administrators in the past 30 years, so I see both sides of this pretty clearly, I think.

    And, being in my 7th hospitalization of 2007 (at BIDMC), with quite real implications for my working life and life outcomes, the simple question is always there: "Where's the best provider?"

    I don't think anyone is saying "That's none of your business," but I don't see anyone pointing me to the list.

    ReplyDelete
  33. It sounds to me like a number of commenters here could use a Fenway Frank.

    Then again, maybe that would put them in a higher risk tier...

    ReplyDelete
  34. Two thoughts:

    I want to echo Paul's statement of the urgency for providers to get involved in quality metrics. The demand is there and growing. The realistic choice facing providers is between ensuring that the best possible metrics are used or falling behind the curve and suffering the fate of public school teachers.

    Secondly, many of the doctors in the above comments show great anxiety over people learning about the cases that just went wrong under their care. I think it is important to give the public some credit here. Just as we, to return to the opening analogy, not only understand but celebrate the fact that David Ortiz FAILS to reach base more than two-thirds of the time, we understand that doctors, nurses, and hospitals are not perfect. Things go wrong. We know. Of course proper risk adjustment and consideration of sample sizes is needed, but I don't think doctors need to have such anxiety about how they will be judged by a few cases.

    ReplyDelete
  35. Why are they sensitive? Because their craft is unimaginably complicated, incomprehensible to lay people, and often a matter of judgment and experience that defies measurement.

    That's why not everyone is a doctor. A pilot has an aircraft. Its design is known. Its failure points are (mostly) known. Yes, s/he may have two hundred lives in hir hands, but almost all the variables have been worked out.

    Human beings, and human bodies, are works of natural art, not an engineering project.

    Which (to harp on the subject) is why healthcare is NOT a "consumer market."

    That said. It's helpful to know more about potential outcomes, since, let's face it, Doctors can't be sure about them either. All we have IS statistics.

    ReplyDelete
  36. Just to echo a bit of what james says above, give the public some credit in being able to sift through the data that is reported.

    And here's a challenge to the docs who have been posting that this is such a bad idea: come up with a better system that takes into account risk factors, comorbidities, etc. The current metrics aren't perfect, but they're better than nothing, and it's the public who wants this. Public reporting is not going away.

    ReplyDelete
  37. To the doc who said "When I finish residency, if this sort of scheme is around then quite frankly I'm not taking patients who have a good chance of dying in 30 days" I would like to thank you on behalf of myself and any of your future patients.

    Speaking as a potential patient, I'd like to think that if my doctor was worried about his ability to care for me, he'd refer me to someone he felt could do a better job, or at least tell me he's worried he can't do the job adequately.

    However, in the absence of that, maybe these "schemes" will do some of the heavy lifting.

    New York seems to have benefited greatly from cardiac surgery outcomes reporting, although the jury is out on whether the poor-performing docs stopped practicing or simply moved to a less transparent state. Either way, the statewide mortality rate post CABG has plummeted.

    http://www.hanys.org/communications/pr/2005/103105_pr.cfm

    Quality reporting drives quality improvement, pure and simple. Consumers getting something to look at is (at this point anyway) a bonus.

    http://content.healthaffairs.org/cgi/content/abstract/24/4/1150?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=Hospital+Performance+Reports%3A+Impact+On+Quality%2C+Market+Share%2C+And+Reputation+&andorexactfulltext=and&searchid=1139907897953_80&FIRSTINDEX=0&resourcetype=1&journalcode=healthaff

    J. H. Hibbard et al. (2005) Hospital Performance Reports: Impact on Quality, Market Share, and Reputation. Health Affairs 24(4), 1150–1160.

    For those who are genuinely interested, here is a link to the Cardiac Surgery report; it contains a full, open methodology. See page 13 for variables.

    http://www.health.state.ny.us/diseases/cardiovascular/heart_disease/docs/cabg_2002-2004.pdf

    ReplyDelete
  38. As a nurse who has been around for a long time (since 1978), I can tell you that medicine has always been a 'secret' society. It is only recently that patients have been allowed access to their own records, let alone data concerning an MD or a hospital. Medicine, I believe, was probably the first profession to use acronyms and abbreviations, and why? To keep things hidden from non-medical people. This is going way back, but there actually used to be an accepted standard abbreviation, {PIA} or {RPIA}, that was actually written in patients' records. I think you can guess what that stood for; many people don't know what the 'R' stands for -- it is for ROYAL. That would never occur now, and that is a good thing, as it should not have been used in the first place, but it was the norm years ago. It takes a while for people to adjust to the type of disclosure that is now occurring, and some people can never accept it, but continuing the outdated practice of keeping things within a private circle of members will no longer be a choice.

    ReplyDelete
  39. Dear Jaz;

    While it is true that mortality has plummeted in NY state, one would have to do a study to see whether higher-risk patients are now being denied surgery, as Dr. Lee et al. feared. That would also produce a plummeting mortality rate attributed to cardiac surgery -- the deaths would simply be attributed to heart disease instead.

    ReplyDelete
  40. Dear Anonymous, given the exclusion criteria and very generous number of risk factors, not to mention any second procedure dropping the patient from the sample, don't you think that leaves very few patients at risk of being considered "too" risky?

    Here's a quote I find alarming and relevant:

    "Surgeons became quite creative in finding ways to keep their patients out of the data sample. David Brown of SUNY–Stony Brook remembers a patient from 1999, a man in his early fifties who was athletic, a bicyclist, whom he referred to surgery for a bypass.

    On paper, the man was a low-risk patient—young, healthy, with just one vessel that needed repair. For some reason, however, the man went into cardiac arrest while being put under anesthesia. If he had died, the Department of Health would have scored the death with a very high mortality and no risk adjustment. But the man survived, and a week later Brown glanced at the report and noticed that the surgeon had performed an additional procedure while the patient was on the table.

    “He did a mitral annuloplasty, which is putting a little ring around the mitral valve,” Brown says. Because of this surgery, this patient no longer could be considered for the state data; he was knocked out of the sample. If the patient died, it wouldn’t affect that surgeon’s mortality rate. “I called him, and he sort of hemmed and hawed about it,” Brown remembers. “I was going to report it, because I thought it was assault. Certainly it was done strictly to manipulate the data.”"

    I would happily give up all risk-assessed data in exchange for the name of any surgeon who feels it in his or her power to perform unnecessary procedures on my human body to avoid peer review.

    Deaths will occur. We are mortal. Nonetheless, what I should have said, instead of saying that the mortality rate has plummeted, is that the variability between surgeons and hospitals has significantly declined -- which is more important.

    If fewer people are getting surgery that wouldn't have kept them alive for 30 more days anyway, well, that's a debate for the health economists and utilisation experts. I'm in no position to state my desire or lack thereof for heart surgery to prolong my life 28 days; I've never been in the situation. Honestly, though, I can easily imagine not wanting it.

    And I stand by my personal choice to enjoy being declined service by anyone who fears their intervention will report out poorly. It's not the only reason I believe in public reporting, but it's up there on the list.

    If you are so worried I won't fare well under your knife, I say pass me on.

    ReplyDelete
  41. Dear Jaz;

    I am anon 9:28. While I don't exactly understand your first sentence (are you saying everyone high risk has already been excluded?), I have no quibble with your general sentiment. I am not a surgeon; in fact, if anyone came under my knife, they would already be dead. (:

    My point is just that all laws or incentives or rules must be examined for unintended consequences, and this is one example. I attempted to read the report you cited despite my lousy dialup connection here, and it seems there is a gold mine of data there which could be mined for additional study on such possible consequences. Why not let these reporting states serve as pilot projects for some years, and study the data instead of theorizing -- or has this been done and I am not aware of it? (How long has NY been doing this, anyway?)

    ReplyDelete
  42. Thanks anon 9:28 (can't people use pseudonyms any more? I'd much rather address a "Bob" than "THX 1138")

    My first sentence was trying to answer the knee-jerk reaction that "all patients are different." These reports are built by consensus and involve specialists who make very difficult decisions about who to include in the sample. They do a thankless job of trying to find a sample of comparable cases, albeit after extensive risk adjustment.

    Nonetheless, yes, many patients are considered too out of bounds for comparative analysis.

    New York has been reporting these numbers since 1994 or maybe 1995 if I remember correctly. Of course, the risk factors have been tweaked over time, and the criteria and risk factors are monitored constantly. None of these reports are "fire and forget", huge amounts of work go into ensuring their validity.

    Pennsylvania also reports on cardiac surgery, and has for at least ten years, if not more.

    New Jersey and California do pretty much the same but I don't know how long they've been publicly reporting.

    There may be more, I may have missed a couple. I think the Florida state Web site has mortality rates.

    All of this misses the point that every hospital, hospital association and state health department that has the data does their own reporting, just not for public consumption.

    So, long story short, cardiac surgery public reporting has been around for about fifteen years.

    j

    ReplyDelete
  43. Speaking of NY, here's another interesting facet of this issue from "The Doctor Weighs In" blog:

    http://www.thedoctorweighsin.com/journal/2007/7/19/should-we-have-health-care-performance-transparency-by-whom-.html

    Funny he makes his comments concerning transparency about NY, which jaz points out has been reporting on cardiac surgery for some time.

    Tangentially, I think Bill McGuire (former CEO of UHC) is the most egregious example of a doctor selling out his Hippocratic oath for profit. His license should be lifted for ethics violations.

    ReplyDelete