Here's an email I sent to our staff today. Also picked up by the Globe. Please check out the site and let us know what you think. Now I can stop posting infection rates on this blog . . . .
Today we start a new experiment, a web site directed to the public called "The facts at BIDMC: We're putting ourselves under a microscope." You can find it on the external BIDMC web site at www.bidmc.harvard.edu/thefacts .
What's this all about? It is our belief that the public deserves timely and accurate information about the quality of care at hospitals. There are other web sites that provide some information; however, most of what is available is not current and is often based on administrative data like insurance claims, rather than on clinical data.
So, we decided to create our own. On this web site, you can see how we rate on certain "process metrics" – for example, how closely BIDMC is following recommended guidelines for treatment of heart attack and heart failure. You can also see how well we are doing in reducing harm to patients – such as our progress in eliminating central line infections. We also show how many times we have done certain kinds of procedures, like bariatric surgery, heart bypass surgery and others.
We show the latest numbers we have for all these metrics. Where national comparisons or benchmarks exist, we compare ourselves to them. Where national standards do not exist or where we think they are not adequate, we show our own goals and how we are reaching them. Where we are not doing as well as we would like (such as with hand hygiene), we show that too.
For each item we post, we try to explain how to interpret and use the numbers. Over time, we plan to add more categories of medical services.
As I noted, this is an experiment, so we also provide a page for reviews and comments. I recognize that this is a new experience for all of us – to have our work so starkly laid out and measured for all to see. I hope you all see this as a valuable tool that helps each of us do our jobs better every day. So please take a look and send us your thoughts.
Awright, Levy; now that's what I call stepping up to the plate! Not only that, but the information layout, in my opinion, is beautifully designed, with clear explanations of the data's significance and the relevant benchmarks. I especially like the part that says a higher score is better, or a lower one. Most of us need that!
This step is nothing less than what I've come to expect of you since becoming a blog groupie. I challenge Mr. Baker and Mr. Dreyfus to match this action with something as bold from their side.
Out of curiosity, when did your staff start working on this? I ask because, in most hospitals, the bureaucracy would require months to get this going; I sense it has been much faster in your case. (No wonder you like your CIO!)
Congratulations on your new experiment! There are too few hospitals self-publishing their data, and I salute you on this new effort. Purely out of curiosity, how come there's no historical data for the CMS process measures? If it's due to the data not being available on the CMS site, I'd be happy to send you our copies.
Paul -
Very impressive and kudos to you and the team for being brave and confident in showing everyone an honest and open look at the care people are receiving.
Definitely a leading edge move that differentiates BIDMC from the pack.
- Dave
This gets an A+ for effort and intention. However, I think it is less useful than it appears to the lay public, and can actually be harmful in some cases.
Some of the quality measures are clearly beneficial (e.g., measuring blood oxygenation, aspirin for heart attacks). However, there is real doubt among clinicians whether some of these measures really do represent quality care (blood cultures in pneumonia), and even if they do, some of the rules are so poorly written that it is unclear which patients are appropriate to include (antibiotics in pneumonia).
Regarding blood cultures in pneumonia:
Journal Watch Emergency Medicine (from the publishers of New England Journal of Medicine) wrote: "... these findings confirm those from many other studies demonstrating that blood cultures are of extremely low yield in patients with pneumonia and that even positive cultures rarely alter therapy (JWEM Nov 24 2004; JWEM Apr 27 2005). We hope the CMS will back off on the 'quality guidelines' requiring blood cultures in patients with pneumonia, thereby allowing EDs to redirect the millions of dollars and enormous human and systems capital currently being wasted on blood cultures into patient care that might actually make a difference."
Regarding the 4-hour rule for antibiotics in pneumonia: The original research showed benefit in patients older than 65 with pneumonia visible on X-ray. Yet the current rule includes all admitted patients who are diagnosed with pneumonia, even if it was not visible on the X-ray, or was an incidental finding unrelated to why they came to the ER in the first place! It would be easy to quickly get 100% compliance - just give strong antibiotics to every patient who comes in the door -- there's no penalty for promoting antibiotic resistance. Would you consider that "improved quality"? It would also be easy to improve compliance by taking every patient complaining of cough and fever (most of whom have a viral URI) and expediting their care ahead of those who are in agony from kidney stones or broken bones. That would represent a significant decrease in the quality of care, but the numbers would look better. Which ER would you rather be seen in?
See http://infectious-diseases.jwatch.org/cgi/content/full/2006/823/1 for more commentary from Journal Watch.
The willingness to be transparent about the hospital's strengths and weaknesses is a great concept -- but I think it's far from clear whether compliance with these measures truly represents the quality of care provided.
Promotion of incorrect "quality indicators" will divert resources from other areas that need them and I think there's an excellent chance that in the long run these will actually worsen care in some respects.
Paul--
The incentive for this data analysis appears to stem from the federal "pay for performance" initiative, where hospitals will receive monetary bonuses for meeting higher standards.
Do you have any kind of program to pass such rewards on to the departments and employees who help achieve these goals and bring in the bonuses?
i.e., do you share the carrot, or just the stick? :-)
Anon 10:52: The measures you mention are required by federal authorities and the Joint Commission. Please direct your suggestions to them.
Anon 11:01: No, they do not stem from pay-for-performance programs. There are no bonuses for these.
We have been working on this for a few months. On the data side, it was not a huge staff effort because we collect the numbers anyway. But getting it formatted, keeping it up to date, and writing sensible explanations took time and effort, some focus groups, and trial and error.
And thanks to the others for your thoughts!
Are you 'putting yourself under a microscope' or just bragging about your high scores? ;)
You would be stupid not to publicize such high scores.
http://www.qualitycheck.org/ has had all of this information for years, AND you can compare it side by side with other hospitals in town.
To anon 10:52: I sense you are a physician. Me too, so let's talk. One can always quibble with which measures of quality are most appropriate or accurate, and of course it's the nature of science in general and of our profession to do exactly that - that's how advances are made. But the point of putting these things out there in public is so that the process of advancement can begin. Paul has chosen, as he noted, required process metrics, which were an advance in themselves when they became required, because before, there was nothing at all.
I have problems with process metrics in general, because it's the outcomes that count, and the dollars spent on those outcomes ("value") that will distinguish the best providers. But inasmuch as Paul's action will pressure other providers, the government, insurers, and everyone else to do something similar, this is truly a beginning. Let's support it and urge the feds and our "friends" at JCAHO to better their required measurements.
Paul
My gut reaction to this move is that in five or ten years everyone will be doing this, and those who bother to reflect on the process will credit BIDMC with being the early leader. While the measures can be criticized, I am sure they will be tweaked over time to better capture the performance they attempt to measure. The most important element of transformational change is simply starting - and BIDMC has done it in a big, big way. Congratulations and thanks.
Paul – Excellent work. Well designed and user friendly. I especially like the data about infection rates, surgical outcomes, and the number of various surgeries performed.
I have a couple of thoughts and questions. First, how does the quality information provided compare with the information that your medical experts would most like to know about a hospital if they were the patient? Are you considering any initiatives over the near term with respect to price transparency? I think one potentially transformative strategy might be to move toward package pricing for a complete episode of care (including follow-up visits and physical therapy) similar to Geisinger's recently announced approach to cardiac bypass surgery. The package price could vary somewhat depending on who the surgeon is. Alternatively, there could be a separate price for the surgeon's fee and a package price for everything else.
Transparency is very rare.
As an IT manager, I used to very clearly communicate what we were doing. When we screwed up, I told everyone why. When things went well, we celebrated. All in very short, punchy communications with some personality and, yes, a little bit of pride.
Because even when we screwed up, we could explain how we fixed it. And people understood how our jobs weren't all that different from theirs, and we un-became "the other" and "those geeks."
It also meant that we were forced to be responsible stewards.
People responded well, even when they didn't understand what we were talking about (and yes, I do use plain English). They loved that, when there was an outage, we said there was an outage, and why, and, importantly, WHAT WE WERE DOING TO MAKE SURE IT NEVER HAPPENED AGAIN.
So. This? This is the way things ought to be, Paul. There is no shame in being imperfect. There is only shame in trying to hide the fact.
Awesome.
As a pathology resident who routinely "looks under the microscope", and deals with QC issues every day in the anatomic/surgical pathology and clinical laboratories, I think this is a great idea!
I just wanted to share my appreciation for this great initiative.
Transparency and patient empowerment are important indicators of the quality of our services and demonstrate respect for our patients' right to make informed choices about their medical care.
I think it is remarkable that in our attempts to continually improve our services, we go beyond fulfilling external reporting requirements mandated by payers, accreditors, regulators and government agencies, to satisfy our patients' needs.
Great initiative, couple questions:
1. What time periods are these? Are they more current than what JCAHO/CMS have to offer via their websites?
2. Is your lay internet-savvy prospective patient going to care about the individual measures? Is there consideration for rolling up measures into aggregates, a la the CMS Appropriate Care Measure and HQA composites?
3. Can you picture the look on Mongan's face if you post PHS data, too? It's all on the HHS and JCAHO sites, so why not put it up here, too? It's worked for Progressive Auto...
4. You've done a lot of shooting across many bows...how much more ammo do you have?
Anon 8:03 - some of this data may be viewable on http://www.qualitycheck.org/ or other source sites like CMS, but don't forget that the data includes self-reported measures on bundle usage and the like; those CLAB measures, for example, are nowhere else to be found.
Even if it were all available elsewhere, it speaks volumes that the hospital stands behind the measures and presents them as its own instead of allowing this data to lurk hidden on low-traffic, un-navigable Web sites.
I like that you also included the data on likelihood to recommend. Do you have any special incentives in place for patients to complete satisfaction surveys? The problem we are having is getting people to actually complete the surveys. Do you offer strictly paper surveys, or do you also offer online or phone options?
Dear Barry,
What is price transparency in a market like this - the amount we get paid by each insurance company for each service? First of all, I am not allowed to disclose that. Secondly, why does it have value to consumers, who don't pay those costs?
Dear Anon 11:40,
1) Time periods are indicated on the site. Yes, several are much more current than other sites.
2) Time will tell. But I am not big on aggregate data or indices.
3) Those sites are full of old data, based on administrative figures as opposed to clinical data. Comparisons among hospitals based on them are therefore not very useful.
But just to clear up one point: Jim is a close and good colleague in the academic medical system, and I have tremendous respect for what he has done in many forums. This is not being done to be critical of him, of anyone else, or of hospitals within PHS. We think it is of value to our patients and prospective patients. But, referring back to question #2, we'll see if we are right.
4) Unanswerable!
Dear Anon 1:55. We get a good response on surveys, with no incentives. We are about to transfer from phone surveys to mail surveys.
Hi Paul,
In response to your last comment about moving from phone surveys to mail surveys:
What are your reasons for transferring from phone surveys to mail surveys? As a sociologist by training, I always thought that phone surveys were more effective (faster, you can call again, more detailed responses, more personal) than mail surveys (which I suppose are more cost-efficient?). Just curious...
Paul, as a BIDMC employee I truly appreciate your commitment to transparency and open dialogue. The hospital recently did an employee survey; can we expect the same transparency and open dialogue on that front as well? I hope to see the same attention paid to this survey as is paid to patient satisfaction surveys.
I am proud to be a BIDMC employee and accept that my contribution as a care provider is important to the total care we deliver and the results we now post. I hope you and the other leaders at the hospital will give serious attention to what employees told you in the survey and maybe post those results as well.
Of course. We promised we would do that when we started the survey. Stay tuned.
Anon 8:07. We are changing survey companies. All indications are that written surveys actually have a better and more complete response rate.
I would challenge your comment that written surveys have a better and more complete response rate...maybe for middle- to upper-class English speakers, but if you truly want to understand the experience of all of BIDMC's patients, then you also have to survey those who don't speak or read English. Yes, surveys can be translated, but health literacy is so much more than language--it's format, readability, even cultural factors. For several of the patient populations served by BIDMC, a written survey is just not a culturally familiar or comfortable medium--for these groups, a phone survey conducted by bilingual surveyors would yield better feedback on the patient experience.
Of course we reviewed all that and switched firms to the one that has now become more widely used because it produces more complete results. I'll get back with specifics on your point after I talk to our folks and get details.
In the meantime, it seems to me that your conclusion about what is more appropriate for different language groups itself makes pretty broad assumptions about literacy and cultural differences. Please elaborate if you don't mind. What groups do you think cannot understand or will not respond to a written survey, and what support do you offer for those conclusions? I'd welcome that perspective. Thanks.
Hi Paul,
I am anon 8:07. I agree with anon 10:47 that health literacy is so much more than language. A written survey created with the general population in mind may ignore some of the intricacies necessary to produce accurate data from different cultural groups (and inadvertently exacerbate health disparities that many in the medical and public health community are trying to change).
For example, consider the Cambodian population. I'm not sure if BIDMC has many Cambodian patients, but Lowell, MA has the second-largest Cambodian community in the country. Most adults in this population are refugees who escaped the Khmer Rouge regime and lived in refugee camps for years. Many had very little education in their home country due to the disturbances of the time--therefore many can neither read nor write in their native language (making translated written surveys completely useless). For a more complete profile of the Lowell Cambodian community, the Cambodian Community Health 2010 project (a coalition led by the Lowell Community Health Center) did a survey of the community:
http://cch2010.info/Health2010Surveys.htm
Besides these written-language issues, there are cultural factors to consider. It is not Cambodian custom to question doctors or ask questions (I found this out doing thesis research). "Appointments" and preventive health care are unfamiliar to most Cambodians. Therefore, faced with a written survey that asks about satisfaction with the care provided and whatnot, you cannot assume that Cambodians will answer it the same way as the general population. For more about Cambodian cultural factors that can influence their health care, see
http://www.ethnomed.org/ethnomed/cultures/cambodian/camb_cp.html#origin
There is also the issue of trust...I think it's easier to get reliable and honest data when patients are speaking to someone they trust (I am assuming it goes a long way when someone speaks the same language as you and understands your culture). I don't have any phone-interview data to back this point, but it seems true enough in the doctor-patient relationship.
These are just some things to think about for one ethnic group...I'm sure there are linguistic and cultural aspects that must be considered for every ethnic group. Basically, anon 10:47 does sum it up...health literacy is more than language, and written surveys (even translated ones) may miss the experiences of many patients.
Bravo! I have not seen a hospital publish information as detailed as this (and as meaningful to the patients) YET!
In re: phone vs. mail surveys: I think it would be hard to assert that phone surveys would overcome the kinds of issues that the poster above cites regarding the Cambodian community, or others like it. An English-speaking phone caller seems unlikely to get better results. The post above suggests to me that mail vs. phone isn't the issue--but that BIDMC might consider some targeted surveys in the communities it serves that appear less likely to respond to whatever survey method it is using. (e.g., if there is a 5% response rate among Cape Verdean immigrants and a 25% average, that should lead BIDMC to design a targeted survey method to get the opinions of the low-responding group).
Of special importance: this kind of approach would be especially helpful in evaluating interpreter services, which in my experience from the provider side are woefully uneven in most hospitals.
Thanks, very helpful suggestions.
Paul – Price transparency does mean your actual reimbursement rates from insurers. I know that providing them is illegal under current law, but I hope the law will eventually be changed. Transparency would be most useful for specific services like MRIs and other imaging. It would also be helpful if there were package pricing for episodes of surgical care like heart bypass surgery, organ transplants, chemotherapy, etc. It would be less practical for ER situations and when patients present complaining of pain or bleeding but the underlying cause needs to be determined.
For an insured patient, it would probably be better to access this information from the insurer's website. If the data were readily available to referring doctors, they could more easily and consistently direct the patient to the most cost-effective providers (assuming quality and outcome data were also available).
As health insurance evolves toward more high-deductible plans, plans with tiered or differential co-pays, percentage-of-cost co-pays (with a cap), and/or higher out-of-pocket (OOP) maximums, patients should become more cost-conscious and price-sensitive going forward.
Whether we are analyzing the status quo or thinking about possible changes in the way healthcare is delivered, financed and insured, we should always be sensitive to the following two economic principles: (1) incentives matter and (2) beware of unintended (and unwanted) consequences.
Hi Paul,
Publishing your own data like this is not only the right thing to do, it's also excellent risk management given the state of public reporting.
As much as commercial sites and government sites that report on healthcare performance might improve their data over time, the vast majority (I'd guess 75%) of "meaningful" data (e.g., current, locally relevant, conducive to learning) exists within providers and represents their own uniqueness.
One thing that has become sad about healthcare is that the ceaseless assault on it has left it, as an industry, quite insecure. In psychosocial terms, hospitals and even physicians have become what mental health professionals might call "field dependent," which means they affirm themselves by comparing themselves against others. While comparison is useful as a reference point, the fact that payers and commercial websites are using immature data to cajole people into making judgments of good and bad providers is forcing hospitals to care about how they are perceived--and sadly, perception becomes reality. This is a normal reaction to being "exposed", but trying to create a single set of agreed measures to publish and compare providers assumes that every provider is the same and that their priorities are the same.
By conforming to an externally defined performance environment, providers will have a hard time fostering a "self-aware" organization. As an example, a major healthcare system I worked with used to rank and reward the top hospitals in the system based on their JCAHO grid score (which no longer exists). When the corporate risk manager became concerned that too many hospitals were leaning on their JCAHO survey results as affirmation that they were "high performing"--and ignoring real risks--he did an internal study. He found that most of the hospitals that scored highest on their JCAHO surveys also had the highest rates of adverse incidents and claims in the system. In other words, the hospitals identified by the external world as the best were, in reality, also the most dangerous. This makes the current focus on showing the world numbers that depict good and bad providers dangerous on many levels. It can have damaging unintended consequences by creating an alternate view of reality.
So being proactive in self-publishing your own performance data is responsible and realistic given the risks and limitations of externally-driven public reporting.
Many thanks, Mike.