Tuesday, December 23, 2014

Hey Doc, please go away!

Aaron Carroll, over at The Incidental Economist, summarizes a study suggesting that patients do better when cardiologists are away at academic meetings.  The gist:

High risk patients admitted with heart failure during meetings had a 30-day mortality rate of 17.5%, compared to 24.8% when more cardiologists were there. Cardiac arrest 30-day mortality was 59% during meetings and 69.4% at other times. 

Why is this?

There are a number of ways to interpret this. Maybe the best cardiologists were the ones who stayed home. Maybe with fewer cardiologists available, fewer invasive procedures get done, and that leads to better outcomes. Maybe the remaining cardiologists tell more low-risk patients to wait, which gets the higher-risk patients more attention and better outcomes. Maybe it’s something else.

I favor the second explanation, and I am reminded of the excellent judgment of my PCP back in 2007, when the touring company asked me to take a stress test before a two-week kayaking trip to Patagonia.

She says, "No. I refuse to order a stress test for you."

"Huh?" I reply intelligently.

"Here's the deal," she says. "If I order the stress test, our especially attentive (knowing who you are) cardiologist will note some odd peculiarity about your heartbeat. He will then feel the need, because you are president of the hospital, to do a diagnostic catheterization. Then, there will be some kind of complication during the catheterization, and you will end up being harmed by the experience."

" I will not authorize a stress test."

5 comments:

  1. This was not only a brilliant idea for a study but also a terrifying result. I would love to see it replicated; if it holds up, it should be disseminated way faster than the usual 17 years…

  2. There's a chapter in SuperFreakonomics revealing that there were fewer deaths while the Boston doctors were on strike.

  3. Maybe…

    But note this from the blog post:

    "When I first looked at this, I assumed that they were concerned that patients would fare worse during the meetings. I thought they were worried with all the cardiologists off at meetings, patients might have worse outcomes.

    The opposite happened."

    In fact, this was the a priori hypothesis (from the paper):

    "We hypothesized that mortality would be higher and treatment utilization lower during cardiology meeting dates."

    Therefore, the investigators’ pre-test probability that mortality would be lower during meetings was low, which makes this finding far from definitive; given such a low prior, it is most likely a false positive result. This is a retrospective observational study using a database collected for other reasons, so we should be very suspicious of any results, especially those that conflict with what we expected. Also, the effect sizes (a 7.3% absolute increase in mortality in high-risk heart failure and a 10.3% absolute increase in mortality in high-risk cardiac arrest) are HUGE. Is that believable? Most RCTs in heart failure use much longer follow-up because acute mortality is much lower than seen here (one in five died? WTH?).

    To give some context, a meta-analysis of ACE inhibitors in heart failure involving >12,000 patients found an absolute mortality reduction of 4%! Finally, these “significant results” were found only in a few subgroups. There were no differences in mortality rates among high-risk patients with heart attacks or among any of the low-risk patients.

    These multiple analyses also raise concerns about false positives due to multiple comparisons. Remember, p<0.05 means that, given that there is truly no difference between the two groups, we would expect to see results at least as extreme as those observed less than 5% of the time. The more inferential tests we do, the more likely we are to see a “significant” result by chance alone (see the short sketch at the end of this comment). One approach to mitigating this risk is to adjust for these multiple comparisons. There are very boring statistical theses written on the various techniques for doing this and on which should be employed when. I didn’t see that any was employed here.

    To give a ridiculous analogy, if we do pregnancy tests on enough men, we will get a positive result. That doesn’t mean that a man is pregnant.

    The same day this paper came out, hundreds of other observational studies appeared showing a significant association between some risk factor and an outcome. The majority will not hold up in clinical trials. Those showing no association are less likely to be published, and those showing predicted results are less likely to be promoted by the mainstream media. This one had multiple factors leading to its promotion as a “golly gee” article. I would be willing to bet that it is a false positive result.
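
    To put rough numbers on the two points above, here is a minimal sketch in Python. It is an illustration, not anything from the paper: the test counts, the 10% prior, and the 80% power are assumed values chosen only to show the arithmetic.

    ```python
    # Illustrative sketch of two standard results behind the
    # "likely false positive" argument above.

    alpha = 0.05  # per-test significance threshold

    # 1) Multiple comparisons: chance of at least one spurious
    #    "significant" result among k independent tests when no
    #    real effect exists (the family-wise error rate).
    for k in (1, 5, 10, 20):
        fwer = 1 - (1 - alpha) ** k
        print(f"{k:2d} tests -> P(>=1 false positive) = {fwer:.2f}, "
              f"Bonferroni per-test alpha = {alpha / k:.4f}")

    # 2) Low pre-test probability: even a "significant" result is
    #    often false when the hypothesis was unlikely to begin with.
    #    Assumed, purely illustrative numbers: prior = 10%, power = 80%.
    prior, power = 0.10, 0.80
    ppv = (power * prior) / (power * prior + alpha * (1 - prior))
    print(f"P(effect is real | p < 0.05) = {ppv:.2f}")  # ~= 0.64
    ```

    Under those assumptions, roughly one in three “significant” findings would be false, and with 20 unadjusted comparisons the chance of at least one spurious hit is about 64%.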

  4. Excellent observations, Jim. Thanks!

  5. A friend of mine comments: "I've been to those meetings only twice. They tend to attract young, awed cardiologists just finished with training; academics presenting papers; 'showboat cardiologists' who do risky procedures and want to brag; or very old cardiologists. This is exactly the group that does the worst patient care. Those of us who actually have the busiest practices and the best outcomes don't go to those meetings, which explains the results. I only go to very focused smaller meetings, like echocardiography conferences, which I have to attend because the regulators track my Echo education hours."
