A friend sent me the link to this great 2005 PBS interview with Lucian Leape and Bob Wachter. It is illustrative to re-read it seven years later. One statement by Bob caught my eye:
I'm actually not a big fan of reporting. Reporting was one of the aviation analogies from around the time of the original IOM report. The Aviation Safety Reporting System is very robust and really very helpful. . . . [When] a near miss gets reported . . . useful action comes from those reports. They get 30 or 40 thousand reports a year, and it's been very, very helpful that way.
The health care analogy doesn't work very well in terms of reporting, and what we've seen with some of these state reporting systems is they get -- some of them have gotten 100,000 reports, and they're sitting mostly accumulating dust on shelves or in computer databases. In some ways this is part of the problem. There are so many errors, there are so many near misses in health care, that reporting them up to the state or to the federal government, nobody has really quite figured out how to take those reports and do something useful with them.
Where I think we have made some progress in reporting is that individual hospitals have something called incident reporting systems, and in good ones, my hospital has a good one where it's a computerized system when a nurse or a doctor sees something that went wrong, they report it through that system. And there is then a feedback loop. Action occurs.
This prompted my memory of a recent email from Steve Spear. I think he makes some great points:
A colleague asked, "Is event reporting good or bad?"
The answer is, "It depends."
And both the question and the answer apply beyond clinical settings.
The answer first depends on:
1- what is reported (frequent, small disruptions such as a momentary break in routine versus infrequent but large and consequential realized hazards)
2- when it is reported (immediately or on a lag)
3- how it is reported (information content, format)
4- by whom it is reported
5- to whom it is reported
6- for what purpose it is reported (to trigger problem containment, investigation, or indictment)
IF we start with the basic premise that for large, complex, dynamic systems
THAT there will always be gaps between intent and actual experience
THEN we need mechanisms to see problems and swarm them at the time and place they occur
IN ORDER TO contain them from infectious spread (metastasis)
AND IN ORDER TO investigate their cause while the conditions that caused the problem are still "hot."
In short, we want to answer the questions above:
1- frequent and small
2- immediate
3- detailed about symptom and associated causal conditions
4- the person immediately affected
5- someone designated to respond immediately (organizational antibody)
6- contain, investigate, solve to prevent recurrence.
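To make the six answers concrete, here is a hypothetical sketch of what an event report shaped by those questions might carry. The class and field names are my own illustration, not Spear's: small and frequent in scope, timestamped at the moment of capture, detailed on both symptom and causal conditions, routed from the person immediately affected to a designated responder, for containment and investigation rather than cataloging.

```python
# Hypothetical sketch (names are mine, not from Spear's email): an event
# report whose fields mirror the six questions above.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class EventReport:
    symptom: str                  # 1. what: a small, frequent disruption, not only crises
    conditions: str               # 3. how: causal context captured while still "hot"
    reported_by: str              # 4. by whom: the person immediately affected
    routed_to: str                # 5. to whom: a designated responder
    purpose: str = "contain-investigate-solve"  # 6. for what purpose
    reported_at: datetime = field(              # 2. when: stamped immediately
        default_factory=lambda: datetime.now(timezone.utc))

# Example: a frontline report routed directly to someone who can act.
report = EventReport(
    symptom="wrong-dose label printed",
    conditions="label template changed this morning; printer queue backlog",
    reported_by="nurse on the unit",
    routed_to="charge nurse (organizational antibody)",
)
print(report.purpose)
```

The design choice worth noting is that the timestamp defaults to "now": the report is created at the moment of detection, not reconstructed later from memory.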
This dynamic of "see a problem, solve a problem" is what biological systems do to maintain homeostatic self-regulation.
Complex technical systems likewise have high-speed, nested feedback and control loops to maintain a combination of reliability and responsiveness.
Complex work systems must have a similar dynamic.
On the other hand, if we answer the questions above as:
1- infrequent and at the point of crisis
2- on delay
3- poor detail and accuracy
4- by third party
5- to someone who cannot or does not act
6- catalog, report, retribute
...we have a system with TERRIBLE control properties that is sure to 'crash,' literally and figuratively.
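Spear's "terrible control properties" claim can be seen in a toy simulation (my illustration, not from the post or his email): a simple proportional controller correcting a drifting process. With immediate feedback the error stays small; feed the same controller stale reports and it overcorrects, and the error oscillates and grows, the figurative "crash."

```python
# Toy control-loop sketch: the same corrective response, fed either
# immediate or delayed reports of the error.
def simulate(delay_steps, gain=0.8, drift=1.0, steps=60):
    """Return the maximum absolute error over the run."""
    errors = [0.0]
    for t in range(steps):
        # The responder sees a report that is `delay_steps` old.
        observed = errors[max(0, t - delay_steps)]
        correction = -gain * observed
        errors.append(errors[-1] + drift + correction)
    return max(abs(e) for e in errors)

print(f"immediate feedback, max error: {simulate(delay_steps=0):.1f}")
print(f"delayed feedback,  max error: {simulate(delay_steps=8):.1f}")
```

With zero delay the error settles near drift/gain; with an eight-step reporting lag the identical gain drives the system unstable. The point is not the particular numbers but that lag alone, with no other change, ruins the control properties.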
1 comment:
Reporting the facts, in as close to real time as possible, and to the people affected is truly beneficial, even though it may not look so on the surface to some (those who triggered the event through personally driven action, as is seen too often in large organizations).
It makes improvement of the whole system, and of most of the people involved, possible.
Thanks, Paul, for sharing this story; despite being seven years old, it has not lost its relevance today!