Monday, April 09, 2012

How to get better at harming people less

Every day, a 727 jetliner crashes and kills all the people on board.  Not really.  But every day, the same number of people in American hospitals lose their lives because of preventable errors.  They don’t die from their disease.  They are killed by hospital-acquired infections, medication errors, procedural errors, or other problems that reflect the poor design of how work is done and care is delivered.

Imagine what we as a society would do if three 727s crashed three days in a row.  We would shut down the airports and totally revamp our way of delivering passengers.  But the 100,000 people killed in hospitals each year are essentially ignored, and hospitals remain one of the major public health hazards in our country.

There are a lot of reasons for this, but I’d like to suggest that one reason is a terrible burden that is put upon doctors during their training and throughout their careers.  They are told that they cannot and should not make mistakes.  It is hard to imagine another profession in which people are told they cannot make mistakes.  Indeed, in most professions, you are taught to recognize and acknowledge your mistakes and learn from them.  The best-run corporations actually make a science of studying their mistakes.  They even go further and study what we usually call near-misses (though perhaps they should be called “near-hits”).  Near-misses are very valuable in the learning process because they often indicate underlying systemic problems in how work is done.

If you are trained to be perfect, it is very hard to improve.  David Rosen, an accomplished educator and administrator, and many years ago the Director of Education Services at Jobs for Youth in Boston, watched my TEDx talk and was prompted to say:

Your concern that doctors’ need to be perfect, to make no mistakes, leads me to a (just coined) adage: "perfect is an enemy of good...and also better."

He goes further and discusses one of his former employees:

When I first worked with Mary she was a perfectionist.  It was driving her crazy.  As we were creating the JFY competency-based GED curriculum, one day she said "Does everything I write in this curriculum have to be excellent, or are there some things that just need to meet a more basic standard?  I don't think I can do everything perfectly. I need to know from you, as my supervisor, which things need to be excellent and which just need to pass." 

Mary taught me -- as a wet-behind-the-ears supervisor -- everything I know about good supervision.

Let’s now take this a step further and consider the role of punishment in such an environment.  At my former hospital, we had a case in which an orthopaedic surgeon mistakenly operated on the wrong leg of a patient. It was quite clear that the hospital’s “time-out” protocol, which was designed to avoid precisely this kind of error, had not been properly carried out. In the weeks following this disclosure, a number of people asked me if we intended to punish the surgeon in charge of the case, as well as others in the OR who had not adhered to that procedure.  Some were surprised by my answer, which was, “No.”

I felt that those involved had been punished enough by the searing experience of the event.  They were devastated by their error and by the realization that they had participated in an event that unnecessarily hurt a patient.  Further, the surgeon immediately reported it to his chief and to me and took all appropriate actions to disclose and apologize to the patient.  He also participated openly and honestly in the case review.

My reaction was supported by one of our trustees, who likewise responded, “God has already taken care of the punishment.”  He pointed out that it would be hard to imagine a punishment greater than the self-imposed distress that the surgeon already felt.  He had taken a professional oath to do no harm, and here he had, in fact, done harm.  But another trustee said that it just didn’t feel right that this highly trained physician, “who should have known better,” would not be punished.  “Wouldn’t someone in another field be disciplined for an equivalent error?” he asked.

This was a healthy debate for us to have, but a wise comment by a colleague made me realize that I was over-emphasizing the wrong point (i.e., the doctor’s sense of regret) and not clearly enunciating the full reason for my conclusion.  The head of our faculty practice put it better than I had, “If our goal is to reduce the likelihood of this kind of error in the future, the probability of achieving that is much greater if these staff members are not punished than if they are.”

I think he was exactly right, and this was the heart of the logic shared by our chiefs of service during their review of the case.  Punishment in this situation was more likely to contribute to a culture of hiding errors rather than admitting them.  And it was only by nurturing a culture in which people freely disclose errors that the hospital as a whole could focus on the human and systemic determinants of those errors.


Jesse said...

I believe the issue of whether the surgeon should be punished or not comes down to this question: What was the culture in the OR at the time of this event? If it was standard to do the time-out completely and properly, yet this surgeon purposefully prevented that from happening, he should have been punished, because he made a conscious decision to bypass a key safety tool. If the time-out at that point in time was not fully entrenched in the hospital culture, and therefore was not done (i.e., he "passes" the substitution test: other surgeons in similar circumstances would have done the same thing), then he should not be punished, as it was a system error.

In a just culture, there does need to be some accountability, which should be reserved for behavioral issues and actively thwarting established accepted standard procedures, but not for human error.

Far be it from me to suggest what you might write in your blog posts. However, I would hope that as you speak out for safety, your readers would clearly get the message that there should be professional accountability for providers of health care to use the tools that are provided and made easily available to them. Asking them to be perfect would be asking them to remember to do everything necessary absolutely correctly every time, without any reminder tools. However, using a tool such as a checklist that is readily available and commonly used is a matter of professionalism, and they should be held accountable for doing so. I believe this is a very important distinction that people need to understand, and that an incredibly important part of safety is intolerance for poor behavior.

By emphasizing the "lack of external punishment" here, I think the post does not reinforce that message, if indeed this was not a systemic cultural failure.

Paul Levy said...

Good points all, Jesse. In my book, I talk about many of the items you mention, as they are very important. Unfortunately, in a blog post, it is hard to cover all the scenarios. Ultimately, it comes down to a definition of "just culture" and the need for everyone in the organization to know the rules and expectations.

In this case, there was a systemic breakdown. It is a really great case from which to learn, and we sent an email about it throughout the hospital.

Many thanks!

Anonymous said...

You are right on all counts. But given that there are so few Paul O'Neills and Paul Levys around to publicly account for institutional behavior (most CEOs are third-paragraph apologists, after all), and given the inherent Paleolithic pace of change in human behavior, I think it is way past time to move beyond the coaxing notion of culture change.

How far has ‘nudging’, after all, changed medical education - where it all, really, begins? We can't wait for the old guard to retire, and a new guard to train – and daily demonstrate - how to test and account to patients for their performance. The story of perfectionism (which begins well before med school, by the way) assumes that there exists a near achievable ideal. But a real understanding of human health, from the evolutionary basis of physiology and microscopic predation, to the epigenetic and psychosocial complexities of health behavior, admits that we know much less about disease and human response than medical schools are willing to admit.

Perhaps, here is where the real conversation begins. Perhaps, in that stubborn gap between physician expertise and the unaccounted for empiricism of patient experience, is the truth of the unknown. The challenge in health care is not that we know so much, but that we really know so little, and refuse the humility of asking for help from other sciences, community expertise, and the very patients we serve.

So, how many 747s until that happens? We don't have to wait for culture change. Overnight, insurance companies, government policy, and visionary leadership can shift the canalized behavior of us all. (Hello CMS, hello ACO).

So, who will break this dam between knowledge and candor? It isn’t that we don’t know what to do. We just won’t do it. It will require: (1) a centralized, protected, anonymous log of harm and near misses observed, experienced, and performed; (2) visible social penalties for poor reporting aimed at the very highest in the priesthood (rather than the next in line, the middle, or the lowest); and (3) elevation of substantial contributions to institutional safety performance as a requisite for all levels of physician and managerial promotion.

One of these would alter the flow of the Mississippi. Two would alter the tides. Three would be an entirely different universe of patient care.

For all the talk of disruption, isn't it time that we actually witnessed a little?

Kerry O'Connell said...

If medicine were truly transparent, the story of your surgeon's errors would be front-page news and include his name. Then the rest of us could decide whether we wish to employ a surgeon who fails to follow proper time-out protocol. You as a blogger, and we as a society, grant this surgeon, and in fact almost all physicians, "Reputational Immunity," thus guaranteeing that the Mississippi continues in the same old course.

Paul Levy said...

We posted the story for the world to see, and it was covered in the newspaper. You fall into the trap, though, of blaming the surgeon, when what went wrong was a series of systemic problems. What happened was replicable: It could have happened to anyone. His name didn't matter.

What you might like to know is that the surgeon, in fact, disclosed this error to his other patients who had scheduled surgery with him, offering to refer them to other doctors if they would feel more comfortable. They chose to stay with him, feeling that his candor gave them a special reason to trust him.

Anonymous said...

I hate to say it, Kerry, but your type of comment is exactly why many health care providers oppose public reporting - for fear that the public will misunderstand it (see Paul's subsequent post about the Joint Commission opposing it). You have jumped to a definitive conclusion about the cause of the error without knowing anywhere near the whole story - even though the CEO of the hospital at the time, with no ax to grind now, is telling you what really happened.

Almost all medical errors have a train of small errors involved which, when placed together like so many slices of Swiss cheese, culminate in a harm-causing error when the holes in the cheese (small errors) match up from front to back. Looking for someone to blame in this situation only guarantees the error will recur when the holes line up once again with a different set of cheese slices.

nonlocal MD

Paul Buchanan said...

I strongly believe that almost nobody causes harm purposefully. While bad actors are out there, the vast majority of people WANT to do the right thing.

I recently lost a member of my family to leukemia. However, as was highlighted in the original posting, we are actually close to confirming that he did NOT die from the cancer.
There was a chain of events, all appearing to be hospital mistakes, that made it impossible for him to regain his strength and recover: a questionable drug, procedures (removing 3 vials of blood from a patient struggling with low hemoglobin), and a pneumonia vaccine (given to a patient with extremely depressed immune function!).

I feel that if at any time I had pulled a physician or a nurse aside and asked, "Given these circumstances, is this the right thing to do?" they would all have said, "No, absolutely not."

What is it about our system that caused 3 life-threatening events to unfold upon a man who was doing incredibly well with his chemo, such that he was dead within 2 weeks of visiting the 2 hospitals?

It's as though the system is rigged to induce failure.