My Australian friend Marie Bismark and colleagues published an article a couple of years ago on the role of boards in clinical governance, drawing on a survey of over 80 health service boards in the state of Victoria. One finding about the 233 board members who answered the survey was remarkably revealing:
Almost every respondent believed the overall quality of care their service delivered was as good as, or better than, the typical Victorian health service.
In an earlier article, Ashish Jha and Arnold Epstein found similar results:
When asked about their current level of performance, respondents from 66 percent of U.S. hospitals rated their institution’s performance on the Joint Commission core measures or HQA measures as better or much better than that of the typical U.S. hospital. Only 1 percent reported that their institution’s performance was worse or much worse than the typical hospital. Among the low-performing hospitals, no respondent reported that their performance was worse or much worse than that of the typical U.S. hospital, while 58 percent reported their performance to be better or much better.
[Exhibit: Hospital Board Chairs’ Perceptions Of Hospital Performance, Compared With A Typical U.S. Hospital, On The Joint Commission Core Measures, 2007–08]
Marie and her co-authors suggest:
A recognised cause of these so-called "Lake Wobegon effects", named after Garrison Keillor's fictional community in which all the women are strong, all the men are good looking, and all the children are above average, is unavailability or underuse of reliable information on peer performance.
I'd go a step further. A couple of months ago, I recalled a wonderful story from Amitai Ziv, the director of MSR, the Israel Center for Medical Simulation at Sheba Medical Center on the outskirts of Tel Aviv. He relates how Israeli fighter pilots would return from their missions and debrief on how things went. The self-reported reviews of performance were very good. Then the air force installed recording devices on the planes, and it turned out that actual performance was not nearly as good as had previously been reported. The conclusion: it's not that people are poorly intentioned or attempt to mislead about their performance. It's just that we tend to think we are doing better than we actually are.
I think the issue is not the unavailability of reliable information on peer performance. I think the issue is a failure, in the first instance, even to measure one's own performance and to share it with one's own team. After all, the issue is not so much benchmarking; that only goes so far. As I've often said, there is no virtue in benchmarking to a substandard norm.
So, the first step is to collect your own data accurately and make it transparent to your own team. It is that transparency--more than benchmarking--that will establish the creative tension in an organization that drives people to meet their own stated standard of clinical excellence. A smart board does not have to apply pressure on its staff by drawing comparisons with others. Rather, it takes governance steps to demand transparency, so that the deep sense of purpose inherent in the clinical staff stimulates the team to do better on its own.