A short New York Times article yesterday by Stephen R. Gray, “Flight School,” got me to thinking. Gray describes the work environment for fighter pilots who land their jets on aircraft carriers:
A carrier pilot must ... learn to accept criticism of his or her performance, from both peers and overseers. The landing signal officer on an aircraft carrier administers a public debriefing and critique of every landing, and a grade is assigned to every pass the pilot makes at the deck. These grades and the pilot’s performance are displayed publicly for all to praise or ridicule. The psychological pressure of this culture is the whetstone that successful carrier pilots use to sharpen their skills -- and the grinder that drives some from the profession.
Now, in medicine, we don’t have anything like this. Yes, while in training, interns and residents receive real-time reviews of their work (often in front of their colleagues) from their more senior residents and from attending physicians. For attending physicians, we hold morbidity and mortality (M&M) conferences when something goes wrong in patients’ care. But we do not generally conduct peer reviews of doctors’ performance once they are certified as full-fledged physicians.
Our Chief of Neurology, Clif Saper, originated a thoughtful practice along these lines. The doctors in his department conduct randomly assigned reviews of the case notes of their colleagues, with an eye toward deciding whether the process of diagnosis and treatment seems warranted by the facts of the case. Those reviews, with the reviewer’s identity blinded, are then shared with the attending physician. The idea is a good one: to help all of the doctors do a better job by providing an objective review of real cases. It is specifically designed not to be threatening, though, and the results are not made public, even within the department.
We also had a similar, more limited experience in our GI department, after it was learned that the speed at which the endoscope is withdrawn during a colonoscopy can make a dramatic difference in the likelihood of detecting pre-cancerous polyps. (See this post for more information.) Each doctor was given a summary of the department’s performance on this metric, along with a confidential summary of his or her own performance. Without any public release of data, everyone’s performance soon rose to the desired level.
But do these efforts go far enough?
The difficulty of doing a carrier-pilot type of review in a hospital is that no place can afford to have dozens of senior physicians standing around judging the performance of dozens of attending physicians, all day long and all night long. In contrast, a single landing signal officer on an aircraft carrier sees every pilot’s pass and can grade it.
But there are metrics of performance that can be applied to surgical and procedural cases. While not perfect, such metrics could send warning signals of the need for improvement -- or perhaps, at a minimum, create a healthy kind of competition among doctors. For example, you could use unanticipated returns to the OR or the incidence of surgical site infections to evaluate surgeons. And, as mentioned in an earlier post, the American College of Surgeons already collects data on risk-adjusted actual versus expected outcomes in certain surgical specialties.
For proceduralists, such as GI doctors, you might measure the number of adverse events, like perforated colons. These data are already collected by every hospital. So imagine if these kinds of metrics were presented every week to the doctors within each group, with names attached.
I know it would be more difficult to assign a meaningful grade to a doctor for the treatment of a given patient. After all, some results are not known for days or months, well after the patient has left the hospital. Focusing on problems rather than successes might give an unfortunately negative view of performance, but at least it would help assess each doctor's ability to avoid harming a patient.
I haven’t raised this issue at BIDMC (yet!). I am willing to guess what the reaction would be -- even in a place where transparency is embraced as fully as at any hospital in the world. “No way!” would be the response, I think. First, people would say that there is no metric or set of metrics accurate enough to give a full representation of a doctor’s performance. Then, people would say that if we shared these data, it would distract people from the “real” issues in patient care and cause them to “teach to the test.” Others might say it would be insulting and a sign of disrespect, especially if the residents and other trainees were allowed to view it.
To which I would say, “You pick the metrics you want to share, the ones you think would be indicative of important aspects of performance. Don’t release them to the world, but use them only in your legally protected peer-review sessions. Tell your residents that you are doing this in front of them to demonstrate that learning and performance improvement never stops. Consider this as an experiment for six months, and see if it changes the nature of the discussions at your faculty meetings.”
Before I suggest that, though -- and please remember that I would only have the authority to suggest it -- I'd like to hear from any of you out there. Do any of you do anything like this in your hospital or your physician group? I am not averse to pushing the envelope on this, but it would be great to hear the experience of others first.