Monday, September 15, 2008

Carrier pilots do it

A short New York Times article yesterday by Stephen R. Gray, “Flight School,” got me to thinking. Gray describes the work environment for fighter pilots who land their jets on aircraft carriers:

A carrier pilot must ... learn to accept criticism of his or her performance, from both peers and overseers. The landing signal officer on an aircraft carrier administers a public debriefing and critique of every landing, and a grade is assigned to every pass the pilot makes at the deck. These grades and the pilot’s performance are displayed publicly for all to praise or ridicule. The psychological pressure of this culture is the whetstone that successful carrier pilots use to sharpen their skills -- and the grinder that drives some from the profession.

Now, in medicine, we don’t have anything like this. Yes, while in training, interns and residents receive real-time reviews of their work (often in front of their colleagues) from their more senior residents and from attending physicians. For attending physicians, we hold morbidity and mortality (M&M) conferences when something goes wrong in a patient’s care. But we do not generally conduct peer reviews of doctors’ performance once they are certified as full-fledged physicians.

Our Chief of Neurology, Clif Saper, originated a thoughtful practice along these lines. The doctors in his department perform randomly assigned reviews of the case notes of their colleagues, with an eye toward deciding whether the process of diagnosis and treatment seems warranted by the facts of the case. Those reviews, with the reviewer’s identity blinded, are then shared with the attending physician. The idea is a good one: to help all of the doctors do a better job by allowing an objective review of real cases. It is specifically designed not to be threatening, though, and the results are not made public, even within the department.

We also had a similar, more limited experience in our GI department, after it was learned that the speed of removal of an endoscope during a colonoscopy can make a dramatic difference in the likelihood of detecting pre-cancerous polyps. (See this post for more information.) Each doctor in the GI division was given a summary of the department’s performance on this metric, along with a confidential summary of his or her performance. Without any public release of data, everyone’s performance soon rose to the desired level.

But do these efforts go far enough?

The difficulty of doing a carrier pilot type of review in a hospital is that no place can afford to have dozens of senior physicians standing around judging the performance of dozens of attending physicians, all day long and all night long. In contrast, one landing signal officer on an aircraft carrier sees every pilot’s pass and can apply a grade to it.

But there are metrics of performance that can be applied to surgical and procedural cases. While not perfect, they could send warning signals of the need for improvement -- or perhaps, at a minimum, create a healthy kind of competition among doctors. For example, you could use unanticipated returns to the OR or incidence of surgical site infections to evaluate surgeons. As mentioned in an earlier post, too, the American College of Surgeons already collects data regarding risk-adjusted actual versus expected outcomes in certain surgical specialties.

For proceduralists, like GI doctors, you might measure the number of adverse events, like perforated colons. These data are already collected by every hospital. So imagine if these kinds of metrics were presented every week to the doctors within each group, with names mentioned.

I know it would be more difficult to assign a good grade to a doctor for treatment of a given patient. After all, some results are not known for days or months, well after the patient has left the hospital. So focusing on problems rather than successes might give an unfortunately negative view of performance, but at least it would help assess each doctor's ability to avoid harming a patient.

I haven’t raised this issue at BIDMC (yet!). I am willing to guess what the reaction would be -- even in a place where transparency is embraced as fully as any hospital in the world. “No way!” would be the response, I think. First, people would say that there is no metric or set of metrics that would be accurate enough to give a full representation of a doctor’s performance. Then, people would say that if we share this data, it will distract people from the “real” issues in patient care and cause them to “teach to the test.” Others might say it would be insulting and a sign of disrespect, especially if the residents and other trainees were allowed to view it.

To which I would say, “You pick the metrics you want to share, the ones you think would be indicative of important aspects of performance. Don’t release them to the world, but use them only in your legally protected peer-review sessions. Tell your residents that you are doing this in front of them to demonstrate that learning and performance improvement never stops. Consider this as an experiment for six months, and see if it changes the nature of the discussions at your faculty meetings.”

Before I suggest that, though -- and please remember that I would only have authority to suggest it -- I'd like to hear from any of you out there. Do any of you do anything like this in your hospital or your physician group? I am not averse to pushing the envelope on this, but it would be great first to hear the experience of others.

(Photo credit: OK3)

12 comments:

  1. Hasn't your proposed "system" been in effect for a few years now, or are you just catching up?

  2. Sorry, I am confused by your question. I don't know of any place where it has been in effect.

  3. Why is there so much resistance to letting patients see how their doctors stack up against their peers? Is it really better that we have to rely on the nurses for this information?

  4. A failed carrier landing as a result of pilot error has the potential of killing the pilot and crew, if the aircraft is not a single-seater. A failed surgery as a result of the surgeon's error will never kill the surgeon or the "crew".

    Perhaps pilots see the value of such regular (every landing) and rigorous assessment as something that protects them and their crew.

    There are six elements to the ultimate rating given each landing. Primary to the LSO's rating is safety -- not where the pilot landed or which wire was caught, but rather how the pilot got the plane on deck. All LSOs are themselves rated pilots, so they have earned the right to offer an opinion on another pilot.

  5. Many ER doctors are used to seeing our patient satisfaction scores, which include a ranking among our peers.

  6. I always practice the advice "ask a good doctor to find a good doctor." In my experience that has worked out well. But how do doctors form these opinions? There must be some way they do. Sadly, you only get into the "system" of good doctors if you already have a good one.

  7. The concept of peer review for physicians has been around for some time now. I believe it is a mandatory requirement of the Massachusetts Board of Registration in Medicine, and as of this year it has been made a requirement by the Joint Commission. Hospitals or physician organizations have some leverage in deciding the metrics by which physician performance gets measured. It is not quite the concept you mentioned with the pilots, but perhaps the existing peer review system can be built on.

  8. Metrics? A joke. If only physician "compliance" could be linked to metrics, let alone compliance. However, I applaud your efforts, Mr. Levy.

  9. I agree with anon 9:57. Although now retired and not "up" on the latest JC requirements, it was my distinct impression that physician-specific performance metrics are a required element - at least they were in our hospital. Our group of pathologists monitored such metrics as accuracy of frozen sections, percentage of "critical" (e.g. unexpected and/or life threatening, such as cancer in a uterus removed for benign reasons) results called to the attending physician in a timely manner, etc., etc. Certain types of cases were also reviewed monthly, such as all prostate biopsies. The results were presented by name within the group, but only the aggregated data was presented to the medical executive committee. We did have one consistent outlier who was eventually fired by the group (after too many years, in my opinion).
    So any doctor who looks askance at these metrics is behind the times, in my opinion. They should be part and parcel of every departmental meeting and submitted to the med exec for final review and monitoring.

    nonlocal

  10. As a senior resident at a top-notch residency program, I would love to get a report on how well I am caring for patients. And I think many of my peers would value the same feedback. As it stands currently, our evaluations are pretty bland and fail to illuminate any real deficits in our training. And I suspect that this is the case for most student-resident-attending evaluations. In addition to learning how well I worked with others, I would like to get data on how well my patients are doing. Are my CHF patients discharged with the right medications? What's the average A1C of my outpatient panel? etc...

    Of note, one CCU attending, when giving me face-to-face feedback, discussed his evaluation scheme with me -- he evaluates residents based on how well the patients do under the care of that resident. So not only did I learn where I did well clinically, but I also learned where I could have done better. This feedback was extremely valuable and constructive -- something we all hope to get and to give with our evaluations.

  11. I've worked on several projects attempting to come up with these metrics. For most places, the answer has to do with what their organizational philosophy is and what their overall goals are. If you just look at something like RVUs, you aren't accurately capturing the physicians who are working on harder cases. Likewise, if you just use HCCs, you may be dealing with a whole other set of statistics, such as who the coder was for that day.

    It's important that physicians understand that it's not second-guessing their medical expertise. It's amazing to me that while my work as a healthcare IT consultant is measured hour by hour, physicians often do not have the same strict controls on their hours.

    It's vital to get the feedback of physicians, and often especially the disgruntled ones, to understand the atmosphere of your organization, and to see it in the context of overall vision and strategy. Patient care isn't black and white -- every patient is different -- which is one of the reasons that this is so hard.

    But then again, I've spent well over 30 hours of my life in various organizations just trying to define what a new patient is!

  12. jamie
    30 hours?? How old are you?
