Friday, July 20, 2007

More on surgery disclosure

Following our discussions below, here is the link that presents cardiac surgery results for individual doctors in Massachusetts. It covers 2002 through 2004, presents the 30-day mortality for isolated coronary artery bypass graft procedures, and includes a risk adjustment in the calculation.

Please note that the chart does not give the numerical mortality figures for each doctor. Instead, it groups doctors into three statistically valid groups: mortality significantly higher than the state average; mortality the same as the state average; and mortality significantly lower than the state average. It also shows how many cases were done by each doctor during this period.
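For readers who want a feel for the mechanics, here is a rough sketch of how a grouping like this could be computed. It is my own simplification, using an observed-to-expected ratio, a simple confidence interval, and invented numbers; it is not Mass-DAC's actual hierarchical model.

```python
import math

STATE_RATE = 0.02  # illustrative statewide 30-day CABG mortality, not the published figure

def classify(observed_deaths, expected_deaths, z=1.96):
    """Place a surgeon in one of three groups by comparing a risk-adjusted
    mortality rate to the state average. Risk adjustment here is the simple
    observed/expected ratio scaled by the state rate, with a Poisson-style
    confidence interval; a sketch of the general idea only."""
    adjusted_rate = (observed_deaths / expected_deaths) * STATE_RATE
    se = STATE_RATE * math.sqrt(max(observed_deaths, 1)) / expected_deaths
    low, high = adjusted_rate - z * se, adjusted_rate + z * se
    if low > STATE_RATE:
        return "significantly higher than state average"
    if high < STATE_RATE:
        return "significantly lower than state average"
    return "no different from state average"

# A surgeon with 8 deaths where the risk model predicted 7.1:
print(classify(8, 7.1))  # -> "no different from state average"
```

Unless a surgeon's whole confidence interval falls above or below the state rate, he or she lands in the big middle group, which is why so few doctors show up as outliers.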

There is also a link that groups this information by hospital.

I don't know about you, but I think this is pretty well done. It has the same kind of statistical validity that doctors would expect among themselves, and yet it is available to the public. It presents information in a way that would be useful to me as a consumer. (It is a bit out of date, but I am guessing physician performance usually does not change all that much from year to year.)

Now that you have seen it, do those of you who have objected below still have objections? Specifically, do you feel this would cause doctors to avoid certain cases? Do you think consumers could not understand this information?

To those of you below who wanted information, does this give you what you want?

10 comments:

  1. This was "easier" than I thought it would be. So many times I see public data and awards with a cryptic, hard-to-understand methodology. The star ratings keep it simple: they send a message, but a message you can't over-analyze.

    That's what really gets to me, though: "Will the public understand it?"

    Take, for instance, an award given in Louisiana for quality. The award is based on core-measure improvement, not maintained high quality. The public doesn't know that. I chuckle every time I pass a competing hospital. They have a huge banner out front for their Louisiana Gold Award, which means they were worse and got better. My hospital has maintained 90%+ performance and got a silver award. Even if we had improved to 100%, we would not have met the criteria for a gold, because we would not have met the 'improvement' standards.

    Back to my point at hand, the public doesn't know that. It was a hot topic in one of my classes given the push for transparency in quality and prices.


    Then there is the whole debate over doctors taking risky cases. I ask the question: if doctors documented their cases as thoroughly as they should, wouldn't the risk adjustments properly reflect those cases?

    (P.S. I'm a 25-year-old "youngin"; my opinions might be wrong or not have a great foundation, but I'm trying to build a foundation here by taking part in this blog discussion.)

  2. Hi,

    What about doctors who regularly treat more complicated cases? Is this ranking fair to them?

    Yigal

  3. Both links provided useful information and were easy to understand. Based on these particular data sets, the conclusion I reached was that, with the exception of, I think, two surgeons who received below average marks, a patient would be in capable hands with any of them. I would also be interested in the number of deaths that are associated with each of the three performance tiers.

    Intangibly, it makes me feel better to know how many procedures each surgeon performed over the time period. It beats having to wonder if my surgeon only does the operation occasionally or does more than enough to stay at the top of his game. The same is true for the hospitals. A hospital that does many of a particular operation suggests that the support team is also at the top of its game. A general indication of the minimum number of procedures per year a surgeon and support team should do to maintain proficiency would also be of interest.

  4. In response to Yigal:

    Most measures I've seen used are risk-adjusted, meaning the patient's age, case severity, and additional conditions all go into a risk-adjusted mortality rate.

    So it helps the doctor to document everything wrong with the patient in the notes. That way the patient is properly risk-adjusted, and a risky patient has less bearing on the doctor's scores.
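    A toy example (numbers invented): say a surgeon has 5 deaths in 200 cases. If the risk model, based on what was documented, predicts 4 expected deaths, the observed-to-expected ratio is 5/4 = 1.25 and the surgeon looks worse than expected. If the same patients' comorbidities had been fully documented and the model instead predicted 5.5 expected deaths, the identical 5 deaths give a ratio of 5/5.5 = 0.91, slightly better than expected.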

  5. This is a great start. The public should be made aware of its existence. The only reference I could find when I needed a surgeon was the Massachusetts Board of Registration's physician profile site. It did not prove helpful.
    I would like to see more procedures investigated and morbidity at 1, 3, 6, and 12 months. Not every complication ends in death. Some permanent complications tied to a surgeon's inexperience don't appear until after 30 days. Trust me, I know.

  6. You may be aware of this, but the NH government is currently debating a bill that would require hospitals to disclose statistics such as central line infection rates, nosocomial infection rates, etc. I know this is a tad off topic, but transparency all around is a good thing.

  7. I am the "demented commenter." (: I was just able to read this. It looks pretty straightforward, but let's analyze a little bit.
    First, the footnote says it's mortality due to all causes within 30 days. I don't know whether this is the standard way to define 30-day operative mortality.
    Second, Paul, as the statistician, would have to comment on whether one can tell, from the number of cases and the time period, whether one or two deaths more or fewer would throw a surgeon into a higher or lower category; I cannot tell (a rough check is sketched at the end of this comment). They do seem to try to control for this with their footnote about the small number of cases for some surgeons.
    Third, as with many of these types of measurements, the vast majority are "average", diminishing their discriminatory value. It would be interesting to me to know if Dr. X attained an "average" rating despite taking more risky cases and therefore is really superior or not. Or would this already be reflected by the risk adjustment?

    All in all, I can't argue, except that I disagree that surgeons' performance may not vary year to year. If they are young, they should be getting better with the passing years. I would be interested to hear what the rated BIDMC surgeons, all of whom scored average, think of these data.
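    As for my second question above, here is a back-of-the-envelope check (my own arithmetic, with an invented state rate and no risk adjustment, so it only illustrates the effect of case volume):

```python
import math

STATE_RATE = 0.02  # invented statewide 30-day mortality, for illustration only

def flagged(cases, deaths, z=1.96):
    """Crude test: is the surgeon's raw rate more than z standard errors
    from the state average? (Normal approximation, no risk adjustment.)"""
    se = math.sqrt(STATE_RATE * (1 - STATE_RATE) / cases)
    return abs(deaths / cases - STATE_RATE) > z * se

# 150 isolated CABGs over the three years, varying the death count one at a time:
for deaths in range(9):
    print(deaths, "flagged" if flagged(150, deaths) else "average")
```

    With these invented numbers, the surgeon stays "average" up through 6 deaths and is flagged at 7, so at low volumes a single case really can move someone between categories, which is presumably why they footnote the low-volume surgeons.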

  8. If the vast majority are average, in the sense of statistical significance, that is useful information in itself.

    (I agree that it would be nicer to have current numbers, too.)

  9. I've still been thinking about this; it seems that if one is going to use the taxpayers' money to compile these data, one should obtain more useful information than 2-3 outliers in each direction. I assume they calculated the average mortality and then used 2 SD to determine 1 or 3 stars.

    I think, first, they should publish the actual average mortality rate for the state and compare it to other reporting states (assuming they used standard methodology, which should be a given) - if it's 5% for instance, either the state has a problem or the risk adjustments don't work. If it's 0.1%, other states should be learning from Mass., or the risk adjustments don't work.

    Second, since there are so few outliers, it would seem that prior to publishing the data, they should review each death among the outliers (there would be few) to verify that each was correctly placed in the 30-day operative mortality category, and to see whether there is a pattern to the deaths, or a lack of one, that would explain the outlier status.
    I know you could say leave this to the hospitals, but they have a vested interest. This is only fair to the surgeons who may be placed in a bad category due to error.

    Third, they should trend this data over time. If different surgeons keep falling in and out of outlier status (as opposed to the same "bad" or "good" ones), this would diminish the credibility of their rating methodology.

    Last, I know at least one of these surgeons has been at the Cleveland Clinic since 2004. Eliminating those people by ascertaining who is still practicing in Mass. would make the data more useful for the Massachusetts public.

    There is one piece of good news: grouping the data by hospital shows there is not one "bad" hospital with higher mortality rates than the others. Interesting, given the no doubt wide variation in volumes, since higher volumes are supposed to produce better results.

  10. If the goal is 'transparency', I think the Mass. report falls short. NY has been at this a long time (since 1989) and has become comfortable with quite complete disclosure (see http://www.health.state.ny.us/diseases/cardiovascular/heart_disease/docs/cabg_2002-2004.pdf), including wonkish info that allows more sophisticated users (including referring physicians?) to look at specific performance (# of cases, # of deaths, expected mortality rate, observed mortality rate, rate with or without valve procedures, etc.).

    The 'three-star' representation chosen by Mass. hides significant information and is usually selected by a committee that wants to suppress evidence of variation rather than expose it. The Mass. report raises suspicions of such manipulation by rating only 2 of 56 surgeons as below average and 1 of 56 as above. Even a one-standard-deviation cutpoint on a normal distribution would leave about 1/6 above and 1/6 below (i.e., roughly 9 three-star and 9 one-star surgeons in the state), yet this report flags only three; some quick arithmetic on this appears at the end of this comment. The Mass-DAC technical report makes it clearer that the table cited in the blog link shows only "outliers", not the full distribution. Typically, statisticians advise conservative committees to pad the cutpoint to clump more docs into the average grouping, to avoid liability or giving offense. The NY approach allows the reader (or a secondary publisher, like a newspaper or trade publication) to make their own decisions about what's significant.

    As a patient or family member looking for a surgeon, the Mass. report is a useless exercise. There's no value in publishing a report that says every surgeon (but 3) in the state performs the same - and it's counter-intuitive. Mass-DAC does provide more detailed information (at http://www.massdac.org/reports/SurgeonSpecificRates2002to2004.pdf). The table on page 6 shows, for example, that Cary Akins actually had a higher risk-adjusted mortality rate than Robert Moses (who is one of the 2 outliers), but due to the confidence interval surrounding his SMIR, he is not shown to be "different" from the average (probably due to case volume?).
    I'd mention a couple of other issues. Even though it's become common practice, I think operative mortality is a lousy outcome measure for CABG. The outcome of interest is pain relief and functioning - that's why the vast majority of procedures are done. Operative mortality may be a patient safety measure, and that's fine, but it should not be the sole available measure of clinician performance. The default survival rate is 98%, after all, and the state is telling us that 54 of 56 surgeons perform the same. So what is a patient really looking for? What makes a "good surgeon"?
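    To put rough numbers on the cutpoint issue above (my own arithmetic, which assumes risk-adjusted rates are roughly normally distributed across the 56 surgeons; the real distribution may differ):

```python
import math

SURGEONS = 56  # surgeons in the Mass. report

def expected_per_tail(z):
    """Expected number of surgeons flagged in each tail if ratings used a
    z-standard-deviation cutpoint on a normal distribution (illustration only)."""
    tail_probability = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z > z)
    return SURGEONS * tail_probability

print(round(expected_per_tail(2.0), 1))  # ~1.3 surgeons per tail at a 2 SD cutpoint
print(round(expected_per_tail(1.0), 1))  # ~8.9 surgeons per tail at a 1 SD cutpoint
```

    A 2 SD cutpoint would be expected to flag only a surgeon or so in each tail, roughly what Mass. published, while a 1 SD cutpoint would flag about nine each way; the choice of cutpoint, not the underlying data, is what makes nearly everyone look the same.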
