The American College of Surgeons, the preeminent surgical organization in the country, has developed a superb program to measure the relative quality of surgical outcomes in hospital programs. It is called NSQIP (National Surgical Quality Improvement Program) and is described in this Congressional testimony by F. Dean Griffen, MD, FACS, Chair of the ACS Patient Safety and Professional Liability Committee.
What makes this program so rigorous and thoughtful is that it is a "prospective, peer-controlled, validated database to quantify 30-day risk-adjusted surgical outcomes, allowing valid comparison of outcomes among the hospitals now in the program." In English, what this means is that it produces an accurate calculation of a hospital's expected versus actual surgical outcomes. So, if your hospital has an index of "1" for, say, vascular surgery, it means that you are getting results that would be expected for your particular mix of patients. If your NSQIP number is greater or less than "1", it means you are doing worse or better than expected, respectively. (As I recall, too, an index is derived for each individual surgeon, but I might not be remembering that correctly.)
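To make the arithmetic concrete, here is a minimal sketch of how an observed-to-expected index of this kind can be computed. This is an illustration only, not the actual NSQIP methodology: the expected probabilities would come from NSQIP's own validated risk-adjustment models, and the numbers and names below are made up.

# Illustrative sketch only -- not the NSQIP methodology.
# Each patient record pairs the observed outcome (1 = 30-day event occurred,
# 0 = it did not) with the model's expected probability of that event.

def observed_to_expected(cases):
    """O/E index for a list of (observed, expected_probability) pairs."""
    observed = sum(obs for obs, _ in cases)
    expected = sum(prob for _, prob in cases)
    return observed / expected  # 1.0 = as expected; >1.0 worse; <1.0 better

# Hypothetical mini-cohort: 2 observed deaths against 2.5 expected deaths.
cohort = [(1, 0.30), (1, 0.25), (0, 0.40), (0, 0.35), (0, 0.20)] + [(0, 0.01)] * 100
print(round(observed_to_expected(cohort), 2))  # 0.8 -> better than expected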
The program also gives participants a chance to see how they are doing relative to the other hospital participants. Are you in the top decile, the top quartile, or the bottom quartile?
This is a powerful and thoughtful tool, and the ACS deserves a lot of credit for their work in putting it together and making it available throughout the country.
But (and there is always a "but"), the ACS does not go far enough. Despite their assertions about a desire for transparency in medical matters, the NSQIP reports are not made public by ACS. Further, participants pledge not to make their own data public.
In an exemplary statewide program in Michigan, in which hospitals and Blue Cross Blue Shield of Michigan are cooperating on statewide implementation of NSQIP, we find the following:
Aggregate data on the impact of the project will be made available to BCBSM and provided in public reports about the project. However, the individual hospital data will be available only to the participating hospital and its surgeons for quality assessment and improvement purposes.
I do not know if there have been debates within the ACS on this matter, but this decision seems to reflect a belief on the part of at least some surgeons that the public is not ready and cannot understand this kind of information -- that the NSQIP tool is very useful for quality improvement efforts within a hospital, but that it is not appropriate to share with the public.
Here is a recent conclusion from an article in the Annals of Surgery that exemplifies this point of view:
At this time, we think that, for most conditions, surgical procedures, and outcomes, the accuracy of surgeon- and patient-specific performance rates is illusory, obviating the ethical obligation to communicate them as part of the informed consent process. Nonetheless, the surgical profession has the duty to develop information systems that allow for performance to be evaluated to a high degree of accuracy. In the meantime, patients should be informed of the quantity of procedures their surgeons have performed, providing an idea of the surgeon's experience and qualitative idea of potential risk.
I think this aspect of the ACS program is wrong and leaves a lot of the value of this program on the table. I believe that the public has a right to know -- and can fully understand -- the NSQIP results, at the hospital level at a minimum. While I respect that there is a debate about the disclosure of doctor-specific data during the informed consent process right before surgery, I believe that this information should also be available to the public to help them choose surgeons well before they sign a consent form.
Perhaps someone from the ACS will comment as to why they have imposed this gag order. Perhaps individual surgeons out there will explain why a tool that is sufficiently valid to use for their own quality improvement programs is not sufficiently valid to present to the public. Further, why should a participating hospital be prohibited from displaying its own information to the public?
Being a surgeon in a multidisciplinary medical world, I ask myself:
1. Do surgical outcomes for a single surgeon depend exclusively on his/her performance?
2. Can we accurately say, in the so-called "chain of health care delivery", that the surgeon is the one doing or not doing his/her job correctly?
"Do or do not, there is no try" Yoda
Julio M. Mayol MD, PhD. Madrid - Spain
Dear Yoda,
Of course, more people are involved, but if I ever said to a surgeon that he is not ultimately responsible for his patient, he would call me crazy and feel insulted.
But if you don't like individual performance data, the NSQIP hospital-wide metric deals with the hospital as a whole.
I wonder how surgeons view the quality of the individual risk scoring mechanism as it relates to accepting or trying to avoid the highest-risk patients. If surgeons themselves view the overall risk scoring mechanism as credible and fair, the data, at least at the hospital level, should be disclosed. For individual surgeons, perhaps that data should only be disclosed once he or she has performed the procedure enough times to reach a minimum threshold number. If his or her cumulative number of procedures is below the threshold, that, in itself, is valuable information that the patient should be aware of.
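A sketch of the kind of disclosure rule being proposed might look like the following; the threshold of 20 cases and the function name are arbitrary choices for illustration, not anything NSQIP specifies.

def public_report_entry(surgeon_case_count, oe_ratio, minimum_cases=20):
    """What a public report might show for an individual surgeon (hypothetical)."""
    if surgeon_case_count < minimum_cases:
        # Low volume is itself useful information for the patient.
        return f"Fewer than {minimum_cases} cases performed; no rate reported."
    return f"O/E ratio {oe_ratio:.2f} over {surgeon_case_count} cases."

print(public_report_entry(12, 0.95))   # volume too low to report a rate
print(public_report_entry(85, 1.10))   # enough cases to report the ratio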
Paul, can you give more information on the measures they use? I think it's important to distinguish between those that are surgeon-specific and those that are more general for the hospital. The quote you posted has a point - patient outcomes are more closely linked to the number of procedures a hospital has done than to the number an individual surgeon has done (as suggested by Dr. Mayol's questions). This reflects the team effect that you've discussed before.
Just because a surgeon is at the sharp end of the scalpel, as it were, doesn't mean that many other characteristics of the hospital and OR environment and staff don't have a big effect on surgical outcomes.
And regarding quality improvement vs. public reporting of hospital-wide measures - as with infection rates, there are probably some technical aspects to adjusting risk that can lead to difficulties in interpretation for outsiders, or even nastiness for the hospital. Not that I'm advocating defensive medicine, I'm just saying it's likely to be a problem given the unfortunate climate we still live in.
Finally (and sorry to be so wordy today) what does "ultimately responsible" really mean anyway, and why does it so often come down to an individual?
Is BIDMC a participant in the ACS program? If so, how is BIDMC gagged? These professional organizations aren't government agencies. Who gave them such power? Take it back.
It would be very interesting to see some progressive hospitals, such as BIDMC, publish their own data. After all, the information is likely a result of your own work product, is it not?
Also, if I were a patient with my outcome fed into the data stream, I ought to have a say where it goes.
So I'd want the results available to the public, not a bunch of stuffed suits who use it to wield power and assure each other's wealth.
Are these "colleges" working for the best interests of the public? I rather doubt it in this case.
Anon 1:23,
Yes, we are in it, but you have to abide by the rules to get access to their national database.
Emily,
The key measure is straightforward: the observed-versus-expected ratio for 30-day post-operative mortality, based on the risk-adjusted mix of patients.
Emily,
ReplyDelete"Ultimately responsible" means what it says. The surgeon has responsibility for the care of his or her patients. Ask any surgeon. Rest assured that they view the world this way. Ask any patient. You will get the same answer.
"Ultimately responsible" means surgeons accept accountability for the outcome, but it does not mean that "the" surgeon is the one not performing at the highest level. A surgeon may be a "star" surgically speaking, but others in the team may not...
My point is that surgical outcomes reflect the "hospital's outcomes," which include the surgeon factor as one of the variables, of course.
In this imperfect world, the best hospitals will consistently have better outcomes. But with a limited number of "best hospitals," these data will only remind patients that there are "fortunate people" who always do better, even when they are sick.
Safety is a major concern for all of us, but raw data is not information. And information without accurate interpretation is not knowledge.
Data can be misleading. And when people are misled, they usually aim their anger at the wrong target. Then surgeons feel unfairly treated and defend themselves... A very bad vicious circle (defensive medicine) that increases costs and decreases safety...
There must be smarter ways to increase safety without picking out a culprit...
Julio Mayol
Dear Julio,
I really appreciate your regular participation in these discussions! Perhaps someday we will meet in person.
I don't see this as finding a culprit. There IS a difference among surgeons, no? I know that surgeons talk among themselves as to who is better or worse than another. I know surgeons, too, have no problem taking ownership and talking about their patients' mortality rate when it is better than average.
I absolutely agree with you, though, that the "rest of the hospital" can make a difference in outcomes. The NSQIP hospital-wide metric captures that.
My pleasure. I find your blog very stimulating.
Moreover, the BIDMC is very dear to me. I was a research fellow in Surgery in '96, when my son was born at the BI. So I share some BI culture values.
Anyway, I am trying to act as devil's advocate.
Every human decision always hides a dark side...
Julio Mayol
Paul,
This is a good post.
As you know, I am a surgeon who publishes his own data and is as open as possible with patients.
The problem with these statistics, from a surgeon's perspective, is how they will change practice habits if they are published. Although there is an attempt to adjust results for patient risk factors, that adjustment is unlikely to be fully accurate.
For example, in my practice a straightforward robotic prostatectomy is an easy operation with predictable results. A robotic cystoprostatectomy for bladder cancer is a more complex operation, with a much higher complication rate and a more unpredictable postoperative course. My results for these have not been great despite having had no problems with the surgery itself.
If my results were scrutinized to the extent that everything was published on a web site, I would no longer do this operation on high-risk patients.
I am gathering that this tool is a relatively new development. It would seem that a period of internal validation of the tool by hospitals would not be unreasonable, prior to public reporting. I do not, however, understand the comment that its validity for individual surgeons is "illusory". Given several surgeons performing the same procedure in the same hospital, with presumably the same equipment and staff, comparison of outcome data between surgeons should be valid over time - unless I am missing something?
I believe that New York State has a public report card on cardiovascular surgery outcomes (30-day mortality), including surgeon-specific data. As I understand it, most surgeons who were distinct outliers either ceased doing those procedures or left the state. (I think this was in the New England Journal of Medicine within the past year.)
Also, the Cleveland Clinic heart center provides public outcome data on many of its cardiac procedures (see its website), although I do not believe it is surgeon-specific - yet.
Expanding on my previous comment, the reference for the state public report cards is: Steinbrook, Robert, M.D.; "Public report cards - Cardiac Surgery and Beyond"; New England Journal of Medicine 355:1847-1849, 2006. I risk a small quote referring to NY State, for those without a subscription.
ReplyDelete"Patients who picked a top-performing hospital or surgeon were about half as likely to die as those who picked a hospital or surgeon with a ranking near the bottom. Surgeons whose CABG operations were associated with the highest mortality rates were much more likely than other surgeons to stop performing CABG surgery in the state within 2 years after the release of each report card...."
"Public report cards are not going away. Indeed, they are likely to become more common and to cover both physicians and institutions, as well as additional surgeries, other procedures, and medical conditions."
The ACS has an opportunity to be ahead of the curve, or to be behind it.
Although it is certainly true that outcomes depend on a team, a demanding surgeon will not tolerate a post-operative team that does not meet his or her standards.
As a surgeon who works at BIDMC I can tell you that I always feel ultimately responsible for the results of my surgeries. Part of being a surgeon is being a leader, and leaders are always responsible for the results of their team or organization. To feel any other way abdicates your responsibility to your patients. It also gives you the moral authority to demand the best from all members of the care team, and the responsibility to take the lead in correcting mistakes or deficiencies. No person or organization is perfect, and even the best surgeons have complications and bad outcomes from time to time. Admitting them, learning from them and taking appropriate corrective action makes you a better surgeon and leads to better outcomes. Whether or not your personal actions caused the bad outcome is irrelevant. All good surgeons adhere to these principles. It's fundamental to our learning and teaching process.
I have no problem with the public being made aware of the NSQIP data.
I agree. I buy it. We surgeons feel ultimately responsible for our surgeries and our patients. With all due respect, I am going to use an expression that I learnt at the BI some years ago: SO WHAT?
I am human and I err: with the patient, with the team, and in overestimating my capacity to control and judge others' actions.
Somehow, this argument reminds me of the cross-examination scene between Colonel Nathan Jessep (played by Jack Nicholson) and the defense attorney played by Tom Cruise in A Few Good Men:
If I am a good surgeon and I feel responsible for my patients and I demand the best from my team, complications would be unavoidable and, thus, there would be no need to measure them...
No offense.
Anyway, I am only worried about the internal and external validity of the tools used to identify the source of errors and adverse events, and about the impact of their potential misuse.
As a surgeon, I have a personal commitment to my patients and I am more than happy to disclose relevant information to the best of my knowledge (mortality, morbidity, 5-year survival, SSI rates...). However, in order to have more precise data about my postoperative mortality, it would be necessary to know where, when, how and why it occurred in every case... With all that information, I am sure we'll all learn.
Julio Mayol
I think sharing of this data would be useful and valid, but I hold the belief that the entire health care system should be much more transparent (even more so when it comes to health care economics).
I do worry about whether the numbers are valid vis-a-vis risk adjustment for different patient types. I have heard rumors of transplant surgeons who cherry-pick patients to maintain extremely good outcome results and thus maintain funding/credibility. The ethics of that aside, unless the risk adjustment is perfectly done, what would stop people from only taking easier cases to keep their numbers up?
Jon and Domenico have now brought up this issue of surgeons not taking on difficult cases for fear that their bad numbers will be driven up. What is this all about? Our surgeons at BIDMC take pride in taking on the more difficult cases and also view it as part of their professional responsibility. Are they that different from the norm? Comments please.
Paul, BIDMC surgeons are excellent. But humans make decisions for reasons that, many times, are beyond our and their comprehension.
JM
Paul;
I am offering my opinion as a retired pathologist, not a surgeon, but I am married to one, have interacted with many, and served for many years on medical executive committees and the like, where these issues are discussed. I think all docs (myself included) are wary of being judged by statistical outcome data, mainly because we fear that the data may not accurately measure how we perform, or will be somehow skewed to "get" us by insurance companies, government, etc. Surgeons, in particular, are heavily anecdote-biased - in that, to them, each case is unique and its details are important for an understanding of how it unfolded and how a particular outcome was achieved. This is captured in the typical morbidity/mortality conference, where all details of cases are aired - but not in dry statistics.
And not all negative outcomes are anyone's fault - some patients just have complications which are statistically predictable in a population of patients undergoing that procedure.
However, I have been involved in cases where a surgeon was clearly a (negative) statistical outlier compared to his peers within a given hospital. Everyone "knew" that he was a substandard performer, but it took the statistics to give the medical staff the courage to finally officially confront him. So clearly, these statistics have their place.
So yes, I could see surgeons cherry-picking cases to improve their numbers, particularly if they perform few cases of a given procedure or their practices are marginally successful for other reasons anyway. And I could see this happening more in private practice than in an academic setting such as yours. But I can't see this happening on a widespread basis once the publication of these statistics becomes the national norm. Right now, it represents "change", always threatening and especially so in the current besieged mentality of all physicians. (And that mentality is justified in my opinion, I might add - one reason I retired at age 52.)
Thanks anon 8:40,
Please explain. Why do so many doctors think they should not be judged by society for their clinical expertise? You call it feeling "besieged" and even retired because of that. What is it about medical training that makes people think they should not be held accountable by something other than anecdotal evidence? I have yet to see a metric that most doctors think is fair, notwithstanding excellent work by IHI and other places to create statistically valid approaches. You folks need to get past that and understand that society has a right to ask these questions. With the government paying about 40% of health care costs, you should also expect legislators to ask them. Sure, it will not always be fair, but who promised that life would be fair?
Paul
I am anon 8:40. Thanks for giving me the opportunity to clarify. I detect a wee bit of frustration in your 9:06 response. (: I hope my reply doesn't run on too long.
First, let me say that I am also the person who posted the earlier comments citing the NEJM article on the public report cards, so I actually agree with you that there should be public reporting of outcome statistics. But you asked for comment about why someone might "pad" their statistics, so I am trying to give you some insight into the psyche of today's physician.
Please reread my sentence, "...we fear that the data may not accurately measure how we perform..." This is the crux, I believe, of docs' reluctance to see public outcome data - certainly NOT that we feel we shouldn't be "judged by society for our clinical expertise." And I acknowledge your statement that there has been good work to normalize for clinical severity, etc. I have been pondering how best to get my point across, and the only way I know is to give you a couple examples of statistics from my own practice - although not surgery, just as important to the patient's outcome.
One of the quality assurance statistics my group of 5 collected was "correlation between pathologist diagnosis and outside consultation diagnosis". This encompassed all cases diagnosed by us and then sent out to a different center for confirmation. It so happened that another member of our group had the highest number of cases sent out for consultation, and the highest concordance between his/her diagnosis and the outside one. I, on the other hand, had the lowest number of cases sent out, but also the lowest concordance. The other group members were somewhere in the middle on both counts. So did that mean he/she was the best pathologist and I was the worst? Certainly the public might conclude so from that statistic. What one should ask, however, is how many of his/her cases were sent out at his/her own request (versus the patient's or treating doctor's) versus mine. It turns out he/she sent out more cases, so as to be sure of the diagnosis, whereas my approach was to send out only the "harder" ones in which I was truly unsure of the correct diagnosis - thus ensuring a higher discrepancy rate. (Keep in mind it costs $$ to send out cases.)
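To put made-up numbers on that point (these are not our actual figures), the selection effect alone can flip the ranking:

# Hypothetical numbers only, to show how send-out policy drives concordance.
a_sent, a_concordant = 40, 38   # sends out many routine confirmations
b_sent, b_concordant = 10, 7    # sends out only the genuinely difficult cases
print(f"Pathologist A: {a_concordant / a_sent:.0%} concordance on {a_sent} send-outs")
print(f"Pathologist B: {b_concordant / b_sent:.0%} concordance on {b_sent} send-outs")
# A looks better (95% vs. 70%), yet B may be the stronger diagnostician:
# the gap reflects which cases each chose to send, not diagnostic skill.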
My next example is a little "quiz", if you will play along with me.
Another statistic we collected was "correlation between frozen section diagnosis and permanent section diagnosis". As you know, a frozen section is a piece of tissue sent to the pathologist during surgery, with a question like, "is this malignant or not?" We are expected to make a diagnosis within 20 minutes, and the frozen section is always of technically inferior quality to a permanent section due to compromises necessary for a quick preparation.
So - pretend (God forbid) your daughter has an ovarian mass which, by clinical and imaging exams, may be either benign or malignant. The surgeon tells her he will send a frozen section during surgery and, if it's malignant, take out both ovaries and her uterus (thus, no children ever again). This is a real-life clinical scenario. You, being aware of the importance of the pathologist in this procedure, consult our publicly reported statistics. You discover that Pathologist A has an 85% correlation between his frozen and permanent sections, and Pathologist B has a 100% correlation. Assuming you could choose the pathologist, which one would you choose and why?
Now remember, you are just the public here, so you may NOT ask anyone at your hospital about this, least of all any pathologist! I am not trying to play with you here, simply to illustrate what could be a realistic scenario should public outcome reporting ensue. I will comment further when you reply.
Further comment from anon 8:40, regarding Paul's 9:06 challenge. Where I DO fault my profession is in not being more proactive in proposing and refining outcome statistics, testing them out internally within hospitals/surgery centers, and then voluntarily reporting them publicly before such reporting is inevitably mandated. The ACS is missing a golden opportunity to (gasp) apply its criteria to individual surgeons and see how they work or what must be honed - before some government clerk or Congress comes up with its own statistics, which will certainly be invalid. We need to get our heads out of the sand.
But consider the law of unintended consequences - given that surgical outcome statistics of the surgeon and hospital are usually intertwined (due to differences in equipment, staff competence level, etc.), could public reporting of surgeons' outcome statistics result in their refusal to operate at, say, an inner city hospital with inferior equipment? What would this do to our health care system? I can't even think of the many other unintended consequences that should be considered.
Perhaps if hospitals like BIDMC are in possession of data suggesting that a particular surgeon is sub-par or that his or her statistics fall below some minimum threshold of competence, he or she could be denied practice privileges at the hospital. If all hospitals in a region could agree on such metrics, that might be one way to weed out the less competent practitioners. Conversely, if a surgeon has especially good statistics that suggest his skills are well above average, he could be designated an "All Star," which would identify him to the public as among the best in the field.
If there are doctors whose peers would not go to them for treatment and would not refer a family member to them, I would love to know that so I could avoid them too. At the same time, let's clearly identify the stars. As a patient, I would also like to be able to infer that the very fact that a given doc has practice privileges at a hospital, especially an academic medical center, means the patient can expect a level of competence that the hospital's own staff would find acceptable if they needed treatment themselves. In the investment world, we call that eating our own cooking.
Thanks for all your comments. I tend to be a bit simplistic, but maybe that is because I am relatively new to the field.
Here is the highly simplified dilemma for a patient: I want to pick a great surgeon or at least a great surgical hospital. Without published information about comparative performance, I will rely on anecdotal information, which has little or no foundation in analysis or rigor.
So, I want to have some data that would be helpful. But the doctors and hospitals tell me that there is no accurate methodology. Oh wait, the ACS tells me it has a great methodology that it shares with its members, but apparently I am not smart/worthy/ready enough to be granted access to it.
How disrespectful would this feel to you?
Folks, we are trying to help the public understand why academic medical centers are important parts of our future health care system -- and why they are worthy of public investment and other special treatment. As Anon 2:03 states, if the people in the field don't start to self-report, they will find themselves subject to mandates over which they have little control.
"The American College of Surgeons, the preeminent surgical organization in the country, has developed a superb program to measure the relative quality of surgical outcomes in hospital programs. It is called NSQIP (National Surgical Quality Improvement Program)...."
Technically, the ACS did not develop NSQIP. It was the VA.