Friday, April 20, 2007

Seven minutes is a lifetime


In medicine, a "miss" is not good. It means that your diagnostic test has missed a potential problem. The case in point today is when the doctor misses a polyp, or adenoma, during your regular colonoscopy and you find yourself with colon cancer a few months later. According to one article I have read, several published studies have reported colonoscopy adenoma miss rates ranging from 6% to 27%.

This happened at our hospital to one of our most experienced GI specialists in late 2005. A very sad story for his patient, who showed up with colon cancer several months after a routine screening found no polyps.

Just a few months later, our folks were at a GI conference and noticed a paper abstract suggesting that the amount of time spent withdrawing a colonoscope is directly correlated with the likelihood of seeing polyps on the way out. The abstract recommended that doctors spend at least 7 minutes withdrawing the scope.

Even though this was just an abstract of a paper that would not be published in full until much later that year, it became the subject of one of our faculty meetings in April of 2006. Based on that discussion, our staff set out to change their practice. Improvements ensued, driven by each individual's good intentions, but compliance with the seven-minute standard then reached a plateau that just wasn't good enough.

The group then held another faculty meeting in January of 2007, at which they decided to give regular reports to each faculty member about his or her individual performance, compared with the anonymous values for the other doctors in the group. This led to still further improvement.

The chart above shows the results for our GI folks. Obvious good progress, with over 90% compliance. And, by the way, you only "pass" if you spend at least 7 minutes withdrawing the scope, so 6 minutes, 59 seconds doesn't count.
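
For those who like to see the arithmetic, here is a minimal sketch in Python of the strict pass/fail calculation. The withdrawal times are invented for illustration; they are not our actual data.

```python
# Hypothetical withdrawal times, in seconds, for a batch of procedures.
# 420 seconds = 7 minutes; 419 seconds (6:59) does NOT count as a pass.
withdrawal_times_sec = [455, 419, 502, 420, 380, 611, 433]

THRESHOLD_SEC = 7 * 60  # the seven-minute standard

passes = sum(1 for t in withdrawal_times_sec if t >= THRESHOLD_SEC)
compliance = passes / len(withdrawal_times_sec)
print(f"Compliance: {passes}/{len(withdrawal_times_sec)} ({compliance:.0%})")
```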

I offer this as an example of (1) enhanced attention to a problem after a bad clinical result; (2) an aspect of academic medicine, in that people are likely to attend conferences and notice new research results; (3) a thoughtful response even before the final data were published; and (4) the importance of providing individual data, even to experienced physicians, about their own performance relative to others and to a standard.

An update on April 24 from Naama in the Department of Medicine: We just finished analyzing the Mar. '07 data on colonoscopy withdrawal time. The compliance rate (withdrawal time ≥ 7 min.) is up from 63% (Feb. '06) to 98% (Mar. '07)! We thank the entire GI faculty and the outstanding nursing staff for the wonderful work you have been doing to improve patient care. This project will go on ... with the hope that we reach and sustain a 100% compliance rate.

5 comments:

  1. This is an excellent example of the application of evidence-based medicine. It is not limited to academic centers, either. Most hospitals worth their salt are requiring each clinical department to monitor several quality-based indicators such as the one you describe. In my opinion, this is where the evolution of public outcome-based statistics you referred to in a previous post should begin - within the hospital, discussed and refined by the clinicians themselves (and individual data is very powerful, as your example proved); and then perhaps used for renewal of hospital privileges. After that process becomes embedded in medical culture, I believe physicians will not be so leery of publicly reported outcome statistics.

  2. Now surely the next step is to see if this has improved your 'miss' rate. If it was an already rare event, this will be harder. Is this audit already in progress?

  3. Yes, it is in progress.

  4. When I first heard of that paper, I wondered: what is cause and what is effect? Of course, if "nothing" is found, less time is going to be spent. If what is being looked at is complicated and marginal, it will take more time to evaluate.

    But the 7-minute time is completely arbitrary. I wouldn't be able to tell what I was looking at if I spent 7 hours, or 7 days.

    The "real" data that is needed is how the rate of false negatives changes with withdrawal rate. That is something very difficult to measure, but with the data set that you have, with a change in withdrawal time for individual clinicians, you should be able to correlate withdrawal time and rate of positives. If there were false negatives before the increase in withdrawal times, that should be apparent in increased positives with longer times.

    But it isn't the mean or median withdrawal time that matters; it is whether a longer withdrawal will find things that a shorter one would have missed.

    What your data will probably find is that some clinicians miss more than others do, irrespective of how much time they spend.

    I suspect that a better statistic to encourage would be the true positive rate for each clinician. Presumably things can only be found if they are there, and if the demographics of the patients are similar, the rates should be similar too. A clinician with a low rate is missing some, but the reason might not be as simple as time spent looking.

  5. I just received material from our local hospital (sent to physicians on staff only) saying that it is using a database called CareScience to provide individual physicians on the staff with the following data on their own patients:
    severity adjusted mortality
    morbidity (complications)
    length of stay
    Here's the kicker: physicians who compare adversely to the top 15th percentile in the national database (encompassing 24 million patients and 2,600 hospitals), by more than 2 standard deviations from the mean, will be selected for peer review at the dept/section level (see the second sketch after the comments).
    You're slowly getting your wish, Paul. Public reporting is sure to ensue eventually.

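Following up on comment 4's suggestion, here is a rough sketch, in Python, of how one might correlate each clinician's average withdrawal time with his or her rate of positive findings. The clinician names and all the numbers are hypothetical placeholders; our actual audit, as noted above, is still in progress.

```python
import statistics

# Hypothetical per-procedure records:
# (clinician, withdrawal time in minutes, polyp found?)
records = [
    ("dr_a", 8.5, True), ("dr_a", 7.2, False), ("dr_a", 9.1, True),
    ("dr_b", 5.9, False), ("dr_b", 6.4, False), ("dr_b", 7.0, True),
    ("dr_c", 10.3, True), ("dr_c", 8.8, True), ("dr_c", 9.5, False),
]

# Group procedures by clinician.
by_doc = {}
for doc, minutes, found in records:
    by_doc.setdefault(doc, []).append((minutes, found))

# Per-clinician mean withdrawal time and positive-finding rate.
times, rates = [], []
for doc, procs in sorted(by_doc.items()):
    mean_time = statistics.mean(m for m, _ in procs)
    pos_rate = sum(1 for _, f in procs if f) / len(procs)
    times.append(mean_time)
    rates.append(pos_rate)
    print(f"{doc}: mean withdrawal {mean_time:.1f} min, "
          f"positive rate {pos_rate:.0%}")

# Pearson correlation between the two per-clinician statistics
# (statistics.correlation requires Python 3.10+).
r = statistics.correlation(times, rates)
print(f"correlation(withdrawal time, positive rate) = {r:.2f}")
```

As comment 4 notes, a positive correlation here would not by itself separate cause from effect; it would only tell you that longer withdrawals and more findings go together.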
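And on comment 5's peer-review trigger, a minimal sketch of flagging physicians whose severity-adjusted mortality falls more than 2 standard deviations above a national mean. The benchmark figures are invented placeholders, not CareScience's actual methodology or data.

```python
# Hypothetical severity-adjusted mortality rates for physicians on staff.
rates = {
    "dr_a": 0.018, "dr_b": 0.019, "dr_c": 0.020, "dr_d": 0.021,
    "dr_e": 0.022, "dr_f": 0.020, "dr_g": 0.019, "dr_h": 0.060,
}

# Placeholder national benchmark; in practice these would come from the
# 24-million-patient, 2,600-hospital database the comment describes.
NATIONAL_MEAN = 0.025
NATIONAL_SD = 0.008

# Flag anyone more than 2 standard deviations above the national mean.
threshold = NATIONAL_MEAN + 2 * NATIONAL_SD
flagged = [doc for doc, r in rates.items() if r > threshold]
print(f"threshold = {threshold:.3f}; flagged for peer review: {flagged}")
```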