Ashish Jha (@ashishkjha) is easy to distinguish from many health care policy people in that he is "an advocate for the notion that an ounce of data is worth a thousand pounds of opinion." Unlike yours truly, he also has access to tons of data, so when he speaks, it is worth listening.
In a recent post on The Health Care Blog, Jha draws the following conclusion:
The debate around the readmissions measure has come to the forefront because of the CMS Hospital Readmission Reduction Program, which penalizes hospitals for “greater than expected” readmission rates. It has raised the question — does a hospital’s 30-day readmission rate measure the “quality of care” it provides? Over the last three years, the evidence has come in, and to my read, it is unequivocal. By most standards, the readmissions metric fails as a quality measure.
[I]f one measure of quality is external validity – being at least somewhat correlated with the gold standard (mortality rates) — how does the readmission measure do? In a paper published recently in JAMA, we see that readmission rates don’t do so well at all. Readmission rates are un-correlated with mortality rates. In fact, for one of the three conditions, the readmission rate seems to go the wrong way: the best hospitals for heart failure (i.e. those with the lowest mortality rates) have readmission rates that are actually higher. Not perfect. Readmissions seem to have little external validity as a quality measure. Readmissions are, however, correlated with two things: how sick your patients are, and how poor your patients are. We now have good data that the Hospital Readmission Reduction Program disproportionately penalizes big academic teaching hospitals (that care for the sickest patients) and safety-net hospitals (that care for the poorest).
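(An aside for the data-minded: the external-validity check Jha describes amounts to something like the sketch below, here with made-up hospital-level rates rather than the JAMA data.)

# Minimal sketch only: hypothetical hospital-level rates, not the JAMA analysis.
# The question: do hospitals' 30-day readmission rates track their mortality rates?
import numpy as np
from scipy.stats import pearsonr

mortality_rates   = np.array([0.08, 0.10, 0.09, 0.12, 0.07, 0.11])  # hypothetical
readmission_rates = np.array([0.24, 0.21, 0.25, 0.20, 0.26, 0.22])  # hypothetical

r, p = pearsonr(mortality_rates, readmission_rates)
print(f"correlation r = {r:.2f}, p = {p:.3f}")
# Jha's point: across real hospitals the correlation is near zero (and for
# heart failure it even runs the "wrong" way), so readmissions show little
# external validity against the mortality "gold standard."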
But does the program help at all? Here's where Ashish goes anecdotal on us (but at least he admits it!):
So, given its poor test characteristics, can we justify using the current hospital readmissions measure to grade hospitals on quality? I don’t think we can. However, here’s where my own ideas have evolved. ... [T]he 30-day readmission measure may be a good way to promote accountability in healthcare.
In conversations with colleagues and friends, the readmissions penalty program seems to have gotten some hospitals to think outside of their four walls. Hospital leadership has started to rethink the role of the hospital. Hospitals are building relationships with community-based organizations. Some are creating follow-up clinics while others are calling all the patients who are discharged to make sure they are doing OK at home.
And his personal summary:
The readmissions program seems to be, for some hospitals, having a positive effect. Will it pay off? Will we see a real, sustained change in the way they provide care to patients after they are discharged? I hope so. But remember – some of the best hospitals in America have the highest readmission rates, almost surely because they care for sicker, poorer patients. In the current business model, they are doing things right – taking good care of the patient while the patient is in the hospital. It’s fine to ask these hospitals to change their business model and to become accountable for what happens to their patients after they are discharged. But, let’s not call them bad hospitals or suggest that they are providing poor quality care. There is no evidence that they are.
How refreshing to hear from an honest analyst, someone who distinguishes between conclusions based on evidence, hypotheses based on anecdotes, and hopes based on societal ethical standards! The only thing missing from this article, in my view, is the "so what?" question. What should we actually do?
I think the answer comes from transparency. Just post, for the world to see, the readmission rates of all hospitals by clinical specialty and let administrators and doctors compare their performance to others. Even without financial penalties, the inherent competitiveness of people in this field will cause them to evaluate their work and try to do better, consistent with underlying standards of quality. If CMS wants to provide a financial incentive, it could give a small bonus to hospitals that voluntarily post such results for each attending physician in real time, not months later. Then you'll see changes in practice patterns!
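To make the idea concrete, the posting could be built from something as simple as the sketch below. The table layout and field names are hypothetical, and these are raw, unadjusted rates for illustration only.

# Minimal sketch of the transparency idea (hypothetical data and field names).
import pandas as pd

discharges = pd.DataFrame({
    "hospital":  ["A", "A", "A", "B", "B", "C", "C", "C"],
    "specialty": ["cardiology", "cardiology", "surgery",
                  "cardiology", "surgery", "cardiology", "surgery", "surgery"],
    "readmitted_within_30d": [1, 0, 0, 1, 1, 0, 0, 1],
})

rates = (discharges
         .groupby(["hospital", "specialty"])["readmitted_within_30d"]
         .agg(readmissions="sum", index_discharges="count"))
rates["rate"] = rates["readmissions"] / rates["index_discharges"]

# Post this, by specialty, where administrators and doctors can see it.
print(rates.sort_values("rate"))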
But this approach is not likely to be considered, much less adopted. Federal and state policy is designed by other people. Look at this comment by another health care policy person:
Stuart Altman, a professor of national health policy at Brandeis University [and chair of the Massachusetts Health Policy Commission Board], said he gets questions from hospital chief executives and chief financial officers asking "why are we getting penalized when we take care of the patient?"
"I tell them, 'you are big, rich and powerful, and you have the ability to resolve the problem and you will be part of the solution whether you like it or not.' "
"There are appropriate readmissions, such as related to different ailments or an unforeseen health event unrelated to the first admission," Altman said. "Hospitals are not penalized in those situations.
"However, there also are non-appropriate readmissions that can be benchmarked and compared with peers and the community."
5 comments:
I enthusiastically agree with Dr. Jha's personal conclusion. Readmission rate measurement may not serve the exact purpose for which it was intended, but inasmuch as it furthers coordination of care inside and outside hospital walls, it has to be doing something positive, which may lead to further innovations.
And, sadly, the only way it seems possible to get the attention of hospital executives is to hit them in the pocketbook. Once you have their attention, then who knows what heights may be reached.
It has been my observation with a number of these initiatives (HIT, Obamacare, etc.) that their primary purpose has been to overcome the decades-long inertia that has prevented health care from improving. One may rightly criticize the details but the important thing is that the ball is rolling and its course may be altered as we go.
nonlocal MD
Seems there ought to be a patient-population adjustment metric so the academic and county (poor patient) hospitals are recognized and rewarded for what they do, rather than punished.
After a radical prostatectomy that went wrong, at a Boston hospital, I was readmitted twice. It was an awful experience, made worse because I was charged for each readmission. It came at a substantial physical, emotional and economic cost to me. I still suffer from this experience.
It is important not to conflate statistics and measurement, on the one hand, with actual quality on the other. Very often, even most often, the former cannot tell you much about the latter.
In the case of readmissions, the best insight into quality and possible improvement would come not from a statistical test, but from an in-depth review (of 100 consecutive cases, perhaps) at a given hospital. A researcher could follow each case as an involved relative would: seeing everything that happened in the hospital and in the transition out of it, then staying with the patient to see what happened day to day for 30 days afterward. Since the observer would be a health professional, he or she could see what could and should have been done versus what was in fact done. This would be a quality study using the hospital as its own reference point.
The problem with end results - in this case a readmission - is that sometimes it happens because of a lapse of good care, sometimes it happens despite good care, and sometimes it doesn't happen even though care was poor. That's the problem with end results measurement. Process measurement is much more sensitive.
Who knows what statistics and cost results such a study would produce? A hospital with serious intent on quality could use these observations to improve its care. Whether that would improve readmission rates, or have a positive financial impact, is another matter entirely: the hospital could improve its care and see readmissions change very little, and the cost of improved quality could leave its finances worse off.
It's important not to conflate.
Professor Altman is wrong: All readmissions, related or not to the index admission, count against a hospital. The only ones removed are planned readmissions.
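In other words, the rule works roughly like the following sketch (a hypothetical helper, not CMS's actual specification): any unplanned readmission within 30 days of discharge counts against the hospital, related to the index admission or not; only planned readmissions are excluded.

# Minimal sketch of the counting rule as the commenter describes it.
from datetime import date

def counts_against_hospital(discharge_date: date,
                            readmission_date: date,
                            planned: bool) -> bool:
    days_out = (readmission_date - discharge_date).days
    return 0 < days_out <= 30 and not planned

print(counts_against_hospital(date(2024, 1, 1), date(2024, 1, 21), planned=False))  # True: unrelated but unplanned still counts
print(counts_against_hospital(date(2024, 1, 1), date(2024, 1, 21), planned=True))   # False: planned readmissions are excluded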