Friday, July 13, 2007

At the heart of the matter

Liz Cooney, on White Coat Notes, offers a summary of a new article in the New England Journal of Medicine by Doctors Thomas Lee, David F. Torchiana, and James E. Lock. It is called "Is Zero the Ideal Death Rate?" Here are some excerpts from her report:

[Dr. Lee] is concerned that public reporting of mortality rates for individual cardiac surgeons carries unintended, perverse consequences. He fears that surgeons might hesitate to operate on high-risk patients if they are seeking a perfect performance record, he and two colleagues write in tomorrow's issue of the journal.

"If you are being ranked, you may walk away from a patient who's very sick, even though that patient may be at high risk for surgery but even higher risk with medicine" as treatment, he said in an interview. "When so few patients can swing things for you being ranked, we're worried about that effect on the decision-making process."

[The authors say that] reporting on cardiac surgery by institution makes sense, with individual reports available only to those hospitals. Massachusetts recently joined New York, New Jersey and Pennsylvania in publicly reporting death rates for individual cardiac surgeons.

Two elements make individual reports undesirable, they said. The first problem is that risk-adjustment methods intended to account for how sick a patient is do not include variables such as socioeconomic status. The second problem is the small sample size. If the average death rate after coronary artery bypass surgery is 2 percent, one or two deaths among the 200 operations a surgeon performs can make a large difference in that surgeon's ranking, the authors say. Lee said a better way to report performance would be the measures the federal government chose when it rated hospitals recently: better than expected, as expected, and worse than expected.

"I worry about having a patient with diabetes who's doing very poorly. They may have a 20 percent mortality rate with surgery but an 80 percent mortality rate without surgery," he said. "I don't want to have to beg surgeons to operate."

I am not quoting from the actual NEJM article, because Liz's summary is what members of the public are more likely to see. So I recognize that some of the subtleties in the article may not be fully presented. To my mind, it raises tons of questions.
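
One aside before the questions. The small-sample problem is easy to see with a back-of-the-envelope calculation. Here is a quick Python sketch -- my own illustration, not from the article -- using the example of a 2 percent average death rate and 200 operations per surgeon:

# How often does a perfectly average surgeon (true mortality rate 2%)
# record various death counts over 200 operations, by chance alone?
from math import comb

n, p = 200, 0.02  # operations performed, true underlying death rate

for deaths in range(9):
    observed_rate = deaths / n
    # binomial probability of exactly this many deaths at a true 2% rate
    prob = comb(n, deaths) * p**deaths * (1 - p)**(n - deaths)
    print(f"{deaths} deaths -> observed rate {observed_rate:.1%}, "
          f"probability {prob:.1%}")

Under those assumptions, a surgeon whose true rate is exactly average has roughly a 15 percent chance of recording 2 deaths (a 1.0% rate) and roughly a 10 percent chance of recording 6 (a 3.0% rate) -- a threefold spread in observed rates produced by chance alone. Now, to the questions.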

First, is the premise correct, that doctors will stop taking high-degree-of-difficulty patients if their clinical results are made public? I am not sure how to test that statistically, but when I have raised the issue at BIDMC, the response was, "If you are a good enough surgeon to take those kinds of cases, you will still take them. If you are not -- or if you are so afraid of your 'numbers' -- you shouldn't be taking them anyway."

Second, if we can't make the results of individual doctors public, what basis is there for referring doctors and patients to choose among surgeons? We fall back on anecdotal or reputational methods -- the methods used today -- which have no statistically valid quantitative basis and are therefore subject to errors of a different type.

Third, a hospital-wide rate doesn't help me choose a surgeon. It helps me choose a hospital, for sure, but it doesn't tell me which surgeon in that hospital offers me the best record of success.

Fourth, if we do want to use hospital-wide rates, there is currently a system in place that moves along the path suggested by the authors. Back on April 6, I posted a column entitled Surgical Gag Order. Here's the pertinent excerpt:

The American College of Surgeons, the preeminent surgical organization in the country, has developed a superb program to measure the relative quality of surgical outcomes in hospital programs. It is called NSQIP (National Surgical Quality Improvement Program) and is described in this Congressional testimony by F. Dean Griffen, MD, FACS, Chair of the ACS Patient Safety and Professional Liability Committee.

What makes this program so rigorous and thoughtful is that it is a "prospective, peer-controlled, validated database to quantify 30-day risk-adjusted surgical outcomes, allowing valid comparison of outcomes among the hospitals now in the program." In English, what this means is that it produces an accurate calculation of a hospital's expected versus actual surgical outcomes. So, if your hospital has an index of "1" for, say, vascular surgery, it means that you are getting results that would be expected for your particular mix of patients. If your NSQIP number is greater or less than "1", it means you are doing worse or better than expected, respectively.
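
For those who like to see the arithmetic, the index itself is nothing exotic: observed deaths divided by the deaths the risk model predicted for that case mix. A toy calculation follows -- the patient risk scores here are invented for illustration, and NSQIP's actual risk model is far more elaborate:

# Toy observed-vs-expected (O/E) index. The per-patient risk scores
# below are hypothetical; a real program derives them from a
# validated risk-adjustment model and many more cases.
predicted_risk = [0.01, 0.03, 0.08, 0.02, 0.15, 0.04]  # model's P(death)
observed_death = [0,    0,    1,    0,    0,    0]     # 1 = died, 0 = lived

expected = sum(predicted_risk)   # deaths predicted for this case mix
observed = sum(observed_death)   # deaths that actually occurred
oe_index = observed / expected

print(f"O/E index = {oe_index:.2f}")  # 1 = as expected; >1 worse; <1 better

With only six patients the index swings wildly, which is exactly the small-sample point above; over a hospital's full annual volume it becomes a meaningful number.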

I am inferring from Liz's article that this is the kind of ranking recommended by the authors in the NEJM article. Here's the catch. The American College of Surgeons will not permit the results to be made public.

So here's our Catch-22: No reporting method is statistically good enough to be made public. But if a method is statistically good enough, we won't allow it to be made public.

The medical profession simply has to get better at this issue. If they don't trust the public to understand these numbers, how about just giving them to referring primary care doctors? Certainly, they can trust their colleagues in medicine to have enough judgment to use them wisely and correctly.

We hear a lot about insurance companies wanting to support higher quality care. When is an insurance company going to demand that the hospitals in its network provide these data to referring doctors in its network? How about this for an idea? If a hospital doesn't choose to provide the data, it can still stay in the network, but the patient's co-pay would be increased by a factor of ten if he or she chooses that hospital.

I have been in many industries before arriving in health care, but I am hard-pressed to remember one that is so intent on preserving the "priesthood" of the profession. The medical community is expert at many things, but particularly at raising stumbling blocks and objections to methods to inform the public and be held accountable. Meanwhile, they are quick to engage in protectionist behavior to keep others out of their field. The insurers, fearful of introducing products that require real-time clinical data from dominant providers in their network, stand by and are complicit.

And then they wonder why state legislatures pass laws about reporting and accountability.

26 comments:

Anonymous said...

Paul;

I respect you and enjoy your blog, but I believe you are shooting from the hip in your second to last paragraph. ("I have been in many other industries....") It is hard to have a reasoned dialogue on this issue when pejorative comments rule the day. We went through this on your surgical gag order post also; I suggest you review the comments from your physician readers.

This issue is not as simple as you present it. I strongly suggest that you make an effort in the next week to go around and talk to as many surgeons in your hospital as you can, and solicit their views on this subject - with an open mind (although unfortunately most of them probably already know your views) rather than by giving them a lecture. You may discover issues you didn't think about.

Anonymous said...

Thank you. I don't view that as a pejorative comment. Other industries that provide essential public services have gone through similar industry transitions. These include water, wastewater, electricity, natural gas, and telecommunications. As they went through that transition, the incumbents often raised objections against public disclosure of important service and quality metrics -- saying that the public will not understand, will be scared, or will be misled. My point is that their objections paled in comparison to those I have seen in this field.

Your point is that those in the field have reasonable objections to this kind of disclosure. Fair enough. Doctors Lee et al were making the same point.

But you then owe it to all of us to help solve the Catch-22. Give your own proposals.

Finally, I don't know where you get the idea that I give lectures to our surgeons.

By saying, "Unfortunately ... they already know your views", you imply that this would cause them to hold back when they disagree. You clearly don't know our surgeons if you think that to be the case!

(Or, if you happen to be one of our surgeons and somehow feel intimidated, please speak up.)

Anonymous said...

Paul,

Great post. Very provocative. I would like to offer my view from the perspective of a patient who has been through CABG, DES and several other surgical procedures and as an investor and securities analyst with several decades of experience in the money management business.

First, I can appreciate the surgeons' concerns about the adequacy of risk adjustment methodology combined with the issue of relatively small sample size. On the risk adjustment issue, I think the surgeons themselves should play a key role in developing the metrics that they think are most relevant and that they would be willing to live with and then publish the criteria. It might also be useful to develop between 3 and 5 risk tiers with benchmark mortality rates developed for each against which each surgeon's results could be compared. So, for example, a 50 year old athlete with bad genes who winds up needing a CABG would be in the lowest risk tier while the elderly diabetic with advanced heart disease would be in the fifth tier. It would also be useful to know how many procedures each surgeon performed in the relevant timeframe.
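
To make the tiering concrete, here is a rough sketch of the kind of report I am picturing -- every number in it is invented purely for illustration:

# Hypothetical tiered report: compare a surgeon's observed mortality
# in each risk tier against a published benchmark for that tier.
benchmarks = {1: 0.005, 2: 0.01, 3: 0.02, 4: 0.05, 5: 0.20}  # tier -> rate

# one surgeon's cases over the period: (risk tier, died?) pairs
cases = [(1, 0), (1, 0), (2, 0), (3, 1), (3, 0), (5, 1), (5, 0)]

for tier, bench in benchmarks.items():
    outcomes = [died for t, died in cases if t == tier]
    if not outcomes:
        continue  # no cases in this tier during the period
    rate = sum(outcomes) / len(outcomes)
    print(f"Tier {tier}: {len(outcomes)} cases, "
          f"observed {rate:.1%} vs benchmark {bench:.1%}")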

As in the money management business, it might also be helpful to disclose rolling 1, 3, 5, and possibly even 10 year performance metrics to give patients better insight into both the longer term record of each surgeon as well as advances in general surgical techniques.
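
Computing those rolling figures is trivial once the data are collected; again, a sketch with invented numbers:

# Trailing 1-, 3-, and 5-year mortality rates, money-management style.
# The yearly (cases, deaths) history below is hypothetical.
history = {2000: (180, 4), 2001: (190, 3), 2002: (210, 5),
           2003: (200, 2), 2004: (195, 4), 2005: (205, 3),
           2006: (198, 2)}

def trailing_rate(end_year, window):
    years = [y for y in history if end_year - window < y <= end_year]
    cases = sum(history[y][0] for y in years)
    deaths = sum(history[y][1] for y in years)
    return deaths / cases

for window in (1, 3, 5):
    print(f"trailing {window}-year mortality: "
          f"{trailing_rate(2006, window):.2%}")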

Anonymous said...

Measuring outcomes in medical care is too complicated to be accurately reflected in a simplistic ratings formula. Pass/fail is simply inadequate. In addition, risk adjustment is difficult and often inaccurate.

I yearn for someone on the business side/administrative side of medicine to say something like the following:

"We all are trying to measure quality in medicine. It is extremely difficult. Simply looking at outcomes ignores the context in which individual physicians practice. As much as we might not want to admit it, accurate measures will require more than simpistic data gathering. Perverse incentives are likely if we choose the such a model"

Chart reviews by an independent physician board, taking current standards of care into account and looking at individual case details and outcomes, would be better. But that would take time and money, neither of which anyone seems to have in medicine these days.

Anonymous said...

Three quick comments/questions:

1. What about providing data on the anesthesiologists? Don't they have some effect on the surgical outcomes?

2. What do you think about the challenge hospitals have (or would have) if one or two of their surgeons are shown to have worse outcomes? How can hospitals address this? Do they (can they) take educational action with these doctors? Or eliminate their privileges at the hospital?

3. If Anonymous doesn't have anything substantive to add and only wants to criticize, he/she shouldn't be anonymous.

Thanks - Mike

Anonymous said...

I joined the healthcare industry three years ago, after a 25-year career in agribusiness management. I work with both physician and hospital pricing and reimbursement.
I have read your blog for six months, and I understand you to be attempting to teach a new paradigm. Your point today is well taken. Unfortunately, gainfully employed participants at all levels in every manner of healthcare enterprise seem to be painfully innocent of their vulnerability. The story of the frog in the slowly heated water, though factually untrue, is apropos. An enormous wave of change has been built up by years, perhaps decades, of successfully resisting logic, common sense, and doing the right thing for the long term at a bearable expense in the short term. The behavior of the coal mining, steel making, and automobile manufacturing unions offers useful examples. I'm not anywhere near the steering wheel, but I think that's what it looks like where the rubber meets the road.

Anonymous said...

OK, data first, then the "who said what" later.

I read the article and excerpted some statements directly from it (ref: NEJM 357:111-113, July 12, 2007). Forgive the length, but I think it's important to get the "nuances" you left out.

(Speaking of balloon valvuloplasty):
"Any cardiologist knows how to minimize the complication rate: use a smaller balloon. But doing so also reduces the hemodynamic benefit and the time before another procedure is needed. Bigger balloons produce better relief of obstruction but carry a slightly higher risk of complications. Thus, two measures are needed to assess quality : one measure of effectiveness and one of safety. The interventionalists who deliver the best care with maximal benefit may well have complication rates in the middle of the bell-shaped curve, not at the low end."

(Speaking to the issue of not taking the hi-risk cases):

"For example, patients with a ruptured papillary muscle after acute myocardial infarction have a high risk of death with surgery but almost no hope of survival with medical therapy. A decision not to operate in such a case might help a surgeon preserve a low death rate but seal the patient's fate. Cardiologists who perform angioplasty in patients with acute myocardial infarction and shock improve long-term survival by 67%2 but also raise their procedural death rate."

Finally, their concluding statements:

"There may be no compromise that would satisfy everyone, but a middle-ground approach may be necessary to improve care so that we move the entire bell-shaped curve, not just track movement within it. Such an approach might combine group-level measurement for public reporting and pay-for-performance contracts with confidential use of ranked outcomes at an individual-provider level for regional quality-improvement "collaboratives."

We are fortunate to live in an age in which the pursuit of perfection is becoming the culture of medicine. Nevertheless, measures that drive providers toward apparently perfect performance should be handled with care."

I think their proposal is reasonable. In your April post I proposed a period of internal hospital validation of the ACS system prior to public reporting, which should be the ultimate goal.

Also, as you know, several states including your own now do report individual outcome rates for some procedures, and NY's cardiac surgery outcome reporting has been credited with improving their outcomes.

So the issue you presented as black and white (Lee et al don't want to report individual rates and you do) is considerably more gray (Lee et al have a specific proposal, but present several examples of realistic caveats).

Pejorative? Yes. Docs don't go around publicly saying all hospital CEOs are know-nothing suits, so don't call us a "priesthood" that is "expert ... particularly at raising stumbling blocks and objections ..." Also, surgeons are not intimidated, just dismissive of such administrations. Lack of disagreement can just mean being ignored. No, I don't work at your place.

I also wonder if working at an academic medical center does produce a different attitude among surgeons than in private practice. I think the latter feel more exposed and may not have the support structure (read: ancillary excellence) of an academic setting.

Alexis said...

Tangentially -

Weeks and weeks ago, you posted about organic/local eggs, which brought about a long exchange with many people. A medical student colleague of mine e-mailed me a link to this site this morning, and I immediately thought of you. I don't think it's specific enough to satisfy your business needs questions/concerns, but it seemed like a good place to start, and they have examples of hospitals that have gone the local/organic route that you might find useful.

Dr. Rob said...

I think this is nicely balanced. Yes, there is a need to be reporting outcomes (people want to know the quality of care they are getting), but if you make too much of numbers you end up working for the numbers.

Another good example of this is when the government uses specific tests as the measure of a school. Teachers are pressured to teach to the test and to nothing else. Multiple factors, not a single one, need to be taken into account whenever you evaluate something like the quality of a hospital, doctor, or school.

Anonymous said...

This is certainly a fascinating issue, and I can see why you are very passionate about this topic. Transparency is certainly wonderful and effective at stimulating positive change in many realms, but can it be applied broadly in all areas without careful execution? It seems that reporting of individual "performance" has the potential to both introduce perverse incentives to avoid the sickest of patients and trouble physicians with respect to the ways in which the quantitative information is interpreted.

To me, it is not a surprise if doctors are worried that the general public would not use the information effectively: what does it mean if one surgeon has a 0.5% mortality rate as opposed to another having a 0.75% mortality rate? The second surgeon has 50% more mortalities than the first, but the absolute difference is small. Or what does it mean for two parents to have a 1 in 4 chance of their child having a genetic disorder? One physician told me that many parents will reply, "Oh that's ok, we were only planning on having three children anyways." At least anecdotally, it's hard to have much confidence in the general public's understanding of statistics and quantitative performance measures.
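
To put rough numbers on that first example (the case volumes here are invented for illustration), a simple confidence-interval calculation shows how little such a gap may mean:

import math

def wilson_ci(deaths, n, z=1.96):
    # approximate 95% Wilson confidence interval for a proportion
    p = deaths / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# surgeon A: 2 deaths in 400 cases (0.5%); surgeon B: 3 in 400 (0.75%)
for name, deaths, n in [("A", 2, 400), ("B", 3, 400)]:
    lo, hi = wilson_ci(deaths, n)
    print(f"Surgeon {name}: {deaths/n:.2%} (95% CI {lo:.2%} to {hi:.2%})")

The two intervals overlap almost completely, so the apparent 50% relative difference could easily be statistical noise -- yet that is rarely how a ranking table reads.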

This is not to say that this information should not be available or made public. The information should, however, be provided for use in effective contexts, such as you suggested: having the information readily available to primary care doctors so that they can help patients select specialists for referral. There is more to a physician's performance than can be measured quantitatively. How do you measure trustworthiness, moral and ethical conduct, and respect? Furthermore, are there quantitative measures available for both improvement in quality of life as well as mortality rate?

Anonymous said...

Many thanks, Apollo and others, for your thoughtful comments.

Let me introduce a variant on this theme. Right now, doctors throughout a city like Boston would likely come to a pretty strong consensus as to who are the best two or three or four heart surgeons in the city. Ditto for many other specialists.

What is the basis for their conclusions? It is clearly not based on a statistically valid set of metrics. It is based on personal relationships, accessibility, and a sense that Dr. Jones or Dr. Smith is technically superb -- sometimes based on one's own referred patients, sometimes based on stories from other doctors.

How can we accept this state of affairs as a way to measure performance and make referrals, and yet object to creation of statistically valid metrics?

I seem to have more faith than most that the public can understand the technical aspects of real data, given appropriate accompanying description.

But, if you do not agree on that point, can't you at least agree with Apollo and me that primary care doctors and other referring physicians should have access to it to make better-informed judgments on behalf of their patients?

Anonymous said...

I am anon 12:41. No objection to your 2:53 proposal. I think hospitals should also have it and use it in the privileging process to pressure their outlying docs to better performance. (This is especially relevant since docs practicing in the same hospital with the same ancillary staff can be more easily compared than across different hospitals.)
I read recently on the Cleveland Clinic website that each physician there is on a one year contract, renewed yearly only after an extensive performance review; there is no tenure.
See, we did find something we can agree on after all. Just have to keep talking, and listening.

Anonymous said...

:)

Anonymous said...

Paul;

Speaking of rankings, I notice that the U.S. News and World Report rankings of America's Best Hospitals are out. Mass General and Brigham and Women's are on the 18-hospital honor roll, but BIDMC is not.

I have read recently that some 80 colleges are opting out of this magazine's college rankings, claiming the criteria used are inaccurate. Can you comment on the methodology for the hospital rankings and how it may have affected your hospital? (I didn't go through all 173 to see where BIDMC ranks).

Anonymous said...

We mainly spend our time trying to explain the ranking methodology to our doctors and trustees! Here's the short version.

We didn't make the honor roll, but we are ranked in the Top 50 hospitals in 10 clinical specialty categories. According to the magazine, BIDMC is among a group of 173 hospitals nationally -- about three percent of the nation's 5,462 hospitals -- ranked in at least one of 16 specialties.

BIDMC ranked in the Top 50 in the following 10 specialties:
1) Geriatrics, 10th
2) Endocrinology, 12th (listed for the first time in partnership with Joslin Clinic)
3) Digestive disorders, 14th
4) Respiratory care, 24th
5) Cancer care, 26th
6) Gynecology, 31st
7) Kidney diseases, 33rd
8) Otolaryngology, 35th
9) Cardiac care, 47th
10) Urology, 48th

While the methodology of these rankings is always a topic of conversation, it is nice to be recognized. My wife, a musician, often says "Reviews don't matter, but it is nice to get a good one."

I don't see us opting out, but you are unlikely to see "getting a higher ranking" in our goals and objectives for any given year!

FYI, other Boston hospitals had the following rankings:

1) MGH ranked in 14 specialties and placed 5th nationally behind Johns Hopkins Hospital, the Mayo Clinic, UCLA Medical Center and the Cleveland Clinic on the "Honor Roll."
2) Brigham and Women's ranked in 13 specialties and placed 10th on the Honor Roll.
3) Dana-Farber, 5th in cancer.
4) Mass. Eye and Ear, 4th in eye and 4th in ear, nose and throat.
5) New England Baptist, 17th in orthopedics.
6) Spaulding, 7th in rehab.
7) McLean, 3rd in psychiatry.
8) Lahey, 25th in urology.

Anonymous said...

Just one more comment to anon 12:41 (which I hesitate to do now that we are agreeing!),

I intentionally did not post the details of the NEJM article because most people do not have access to it. They would only have seen the Globe's blog. So, I was making my comments based on what the "person in the street" would know about the article. I made that very clear when I said: "I am not quoting from the actual NEJM article, because Liz's summary is what members of the public are more likely to see."

By the way, wouldn't it be great if NEJM did not require a payment so any member of the public could read this kind of article upon publication?

Anonymous said...

Yes, I was aware of your statement; however, I felt that Liz's summary did not do justice to the article. Unfortunately, this happens with almost all media summaries of NEJM articles (which seem to be the most quoted), which is why the public thinks the medical profession goes back and forth on evidence about various things. I subscribe to the NEJM and often get a totally different impression from the media summary I read in the newspaper than from reading the whole article in the journal.
The devil is in the details, as they say, and in medicine the details can be critical.

It is for this same reason that I believe Apollo and I generally agree that the statistics should not be available to the public; the media would "summarize" them in their non-diligent way, and give people the wrong impression.

Anonymous said...

Paul;

I forgot to comment on your last statement about the NEJM being free. I have commented in the past that it's difficult to engage the general public in a debate about important issues like organ transplant ethics, etc., when the profession's own debates about these issues are hidden from the public. It doesn't help the profession's reputation, nor does it result in consensus viewpoints on extremely important issues. In addition, the NEJM is hostage to the media's interpretation of the journal and to people like you and others blogging about it in the public realm.

I don't see why the NEJM couldn't make their editorial content free to the public online. I understand that their leadership has some Harvard connections. I am just a nobody subscriber, but with your stature in Boston and Harvard relationships, perhaps a letter from you regarding this issue would have some effect ...

Anonymous said...

You think I still have stature and relationships after writing this blog?

:)

Anonymous said...

Well, considering I just discovered Dr. Lee is associate editor of NEJM, you may have a point. Hopefully he doesn't read your blog.

Anonymous said...

Just wanted to say thanks for your blog, it is wonderful! As a nurse who has been around for a long time (since 1978), I can tell you medicine has always been a 'secret' society. It is only recently that patients have been allowed access to their own records, let alone to data about an MD or a hospital. Medicine, I believe, was probably the first profession to use acronyms and abbreviations -- why? To keep things hidden from non-medical people. Going way back, there was actually an accepted standard abbreviation, {PIA} or {RPIA}, that was written in patients' records. I think you can guess what that stood for; many people don't know what the 'R' stands for -- it is ROYAL. That would never occur now, and that is a good thing, as it should not have been used in the first place, but it was the norm years ago. It takes a while for people to adjust to the kind of disclosure that is now occurring, and some people can never accept it, but there will no longer be a choice to continue the outdated practice of keeping things within a private circle of members.

Christian Sinclair said...

Great article in this week's JAMA about the biases inherent in using mortality as a measure of quality.

Mortality as a Measure of Quality

Implications for Palliative and End-of-Life Care

Robert G. Holloway, MD, MPH; Timothy E. Quill, MD

JAMA. 2007;298:802-804.

Holloway and Quill do a great job of explaining the misconception that mortality rate equals deaths that should have been prevented (i.e., death as a medical failure), versus the understanding that many things can affect a hospital's mortality rate.

I know this is an old post, but for those really interested in this issue, this is a must-read.
Brief Excerpt Here (on Pallimed.org)

Paul, I would be interested in a future post if you would not mind giving your take on palliative care in the hospital. From a CEO level, how do you perceive palliative care? Is it about quality, mission, realism of illness, cost-savings, all of the above?

Obviously, as a palliative care doctor, I am biased on how important I think it is. I have noticed that one post (a letter from a family) mentioned palliative care, while others have tangentially mentioned hospice.

Thanks, and keep up the good work.

Christian Sinclair, MD
Pallimed

Anonymous said...

That link to JAMA didn't work. Can you please try again, and also provide an excerpt here for those who do not have subscriptions?

I don't view the mortality index as biased. It's just a metric that includes all kinds of deaths, as you mention. As long as people recognize that, it is a good measure -- especially when risk-adjusted, as is IHI's number.

Happy to cover palliative care at BIDMC sometime. It is a very important program.

Anonymous said...

Also -- and I am not saying you are doing this -- if every time someone proposes an index or metric, people in medicine say it's not good enough, all they are doing is inviting a lay group of legislators to impose one on the profession. At some point, the profession has to say "this one is close enough."

Christian Sinclair said...

Here are the native links: (sorry the HTML coding did not work)

Link to JAMA article:

http://jama.ama-assn.org/cgi/content/extract/298/7/802

Link to excerpt:
http://www.pallimed.org/2007/08/mortality-quality-whole-brain-radiation.html

Christian

Christian Sinclair said...

I agree with your point about indexes/metrics. Skepticism and criticism can be paralyzing and have their limits. But it would help if, when metrics are published, there were an easy way to include a summary that highlights each metric's limitations, so people can make more insightful judgments -- much as published research usually has a discussion section that defines what the study is unable to assess.

Another problem with constant skepticism is that if you continually tweak a metric to improve it, you greatly limit comparisons with past studies of the same metric, thereby wasting a lot of time!