Tuesday, February 17, 2009

A conundrum

Milt Freudenheim's story in yesterday's New York Times is about ranking physicians in the style of Zagat's restaurant guides. Indeed, the venture is being run by Zagat. The final sentences caught my eye:

Arthur Caplan, director of the Center for Bioethics at the University of Pennsylvania, said he was skeptical about open forums evaluating doctors.

“There is no correlation between a doctor being an inept danger to the patient and his popularity,” Professor Caplan said. Reviewing doctors is “a recipe for disaster,” he said.

I think we can agree that this kind of ranking has very little substantive validity with regard to the relative ability of doctors to diagnose and treat patients. But where does that leave us? Should there be a more systematic appraisal of these skills? Should it be available to the public?

What happens now? Well, informal conversations form the basis for physician reputations.

I know of one primary care doctor, for example, who has an outstanding reputation among his patients but who is regarded much less highly by his peers. Who is correct?

The same is true among specialists. Some surgeons and other specialists are very highly regarded by referring doctors, in large part because they make themselves available when requests come in to see patients. But those referring doctors have no substantive basis for knowing whether the outcomes experienced by their favorite specialists are better than, equal to, or worse than those of others.

Should we care about this at all? For referring doctors reading this, what criteria do you use for making recommendations to patients? Are you satisfied with the information you have?

17 comments:

the man from Utz said...

One may discuss the premise and the method separately, I think. Having information available to referring doctors can be helpful if that information is gathered in a way that reflects the rigors of science. My personal fear is that "reviews" by treated patients will gather a great deal of subjective, unscientific, and misleading opinion.

As a business person, I know that one hears the discontent but never the gratitude for good work, and that the information tends to skew to the negative. I add to that my experience of watching other patients in health care, and I fear that a patient's reaction to treatment has a good deal more to do with their fears and hopes than with the facts of the treatment. The danger I foresee is that very competent doctors who deal in difficult specialties would be disadvantaged in these ratings not by their lack of skill but by the distress caused by the conditions they treat.

Anonymous said...

Excellent points. But do you think referring doctors, i.e., those also trained in medicine, would be astute enough to deal with that last point? Also, I could see things working the other way; i.e., patients with really difficult conditions might be especially appreciative of a doctor who cured them.

But your general point about the inexactitude of consumer surveys is an important one.

That is why I am hoping that referring MDs respond to my last question: How do you know that the specialists to whom you habitually refer get good results, whether relative to some accepted benchmark, relative to other MDs, or relative to anyone else?

Lachlan Forrow, MD, FACP said...

This is hard, complicated, important, and very dangerous.

I’m reminded of when we had a neighbor who complained to my wife Susan about the care she received from her BIDMC PCP, especially at the level of human interaction. This PCP was one of my closest colleagues, someone I would have recommended without qualification to anyone, especially because of her/his personal warmth and compassion. I simply didn’t recognize the description. I never figured out how to tell what the “truth” was, short of wishing that I had a videotape of the encounter(s). Was my colleague simply not (or not consistently) as wonderful with patients as s/he was with me? Was this neighbor far more entitled/judgmental than I would have guessed? Was either or both just having a bad day during the visit? Some of both?

But afterward I never had quite the same unqualified admiration for and confidence in my colleague. Had this patient’s comment tragically poisoned things? Or accurately corrected my perception?

I am no longer a PCP, but if we develop systems that make it easy for any unhappy patient to post her/his unhappiness for the world and name my name, that would be a serious threat to my morale. While it might not make me actively avoid patients I thought might express their unhappiness (though it might), it would almost certainly reduce my energy for actively seeking out “difficult patients,” whose challenges and occasional rewards I have found to carry satisfactions that outweigh the frustrations.

--Lachlan

Lachlan Forrow, MD
Director, Ethics and Palliative Care Programs
BIDMC

Anonymous said...

How would you have me select a Primary Care Physician (PCP) then?

Ask friends and family if they like theirs? That's the usual answer, and it amounts to a survey with a very small sample size - even more likely to mislead.

Ask a chimp to throw darts?

Ratings are coming: on the web, nobody needs your permission or approval to set up a ratings system. If it seems unbiased and fair, and rates the things people care about, it will get traction.

Do you want to help steer the bus or get run over by it?

Anonymous said...

Consumer restaurant ratings tell you nothing about the quality of the kitchen staff and whether they wash their hands or avoid cross-contamination of food. They simply rate the quality of the experience. The health department certifies the quality of the kitchen. I think we need both. Great clinical specialists shouldn't get a pass on customer service just because they're technically proficient.

Dr Patrick McGill said...

As a PCP, I can say that the relationship you develop with the specialists to whom you refer is a unique one. When I first started practice, I referred to everyone. Then, over time, I developed a sense of who saw patients the fastest, who treated them politely, who called me when there was a complication, and who never questioned my judgement when I sent a patient for a consult. Can I say these specialists have lower infection rates, lower perforation rates, or better outcomes? Quite simply, no. I don't have that data. I have all of these other factors, which are part of the "Art of Medicine." Which is more important? I don't know.

The issue I have with posting subjective data (or objective data, for that matter) is twofold. First, where is the data coming from? I receive insurance forms daily questioning why I have not ordered an HgbA1c or a liver panel or an eye exam. Almost universally, I have done all of these "recommendations," but the claims data is not up to date or accurate. Second, we all know medicine is complicated. Patients are non-compliant, conditions change, and patients have bad outcomes despite our best efforts.

I think the real fear among doctors is that "quality" data will have nothing to do with real quality, will instead reflect subjective impressions, and will ultimately divide the doctor-patient relationship even further than it already is.

Lachlan Forrow, MD, FACP said...

To Anonymous:

The way that I picked my PCP was to choose a group practice with a good reputation and a philosophy of care I agreed with (Harvard Community Health Plan, which has evolved into Harvard Vanguard). I then looked at their PCPs and chose one who had been with them for a substantial period of time. I did that because I believed that (a) they do a reasonably good job of screening staff candidates before they hire them; and (b) they then do a reasonably good job of evaluating the quality of their care. Someone who has been there a while has thus likely been very well vetted. For (b), I know that at least in the past HCHP was obsessive about including regular, systematic, scientifically-valid patient surveys in its assessments of quality, and those data were taken very, very seriously. I think that that is a far better way of having patient experiences drive health care quality than a Zagat-like or Amazon-like system, which has far less scientifically-valid data, with some of the dangers I alluded to in my first post above.

What’s missing in this, however, is objective data that Harvard Vanguard itself is as good as I hope it is. For that, I wish that there were systematic, scientifically-valid patient surveys across all health care institutions with public reporting of results – in the Boston Globe, etc. In my own field (palliative care), I would love for there to be annual or bi-annual surveys of patients and family members across the Commonwealth about whether or not, in the context of terminal illness, their caregivers inquired about goals of care, addressed pain and other symptoms adequately, answered phone calls or other requests for help promptly, were considerate/compassionate, etc. And if I knew that BIDMC results were going to be in the Globe each year, that would be a very (very) helpful motivator for all of us here to live up to our own very high standards, even when we are tired. I think that this public accountability is, for example, a FAR more powerful motivator for most professionals than financial incentives are.

I think that patients/citizens across the Commonwealth should be demanding this kind of information. If anyone wants to help me do that in the area of end-of-life care, great!

--LF

PS: Another disclaimer: the whole approach described above is simply one that was developed and used transiently in Massachusetts through the Picker Institute, which was created by my longtime boss and mentor at BIDMC, Dr. Thomas Delbanco.

Anonymous said...

For many procedures, there are known rates of complications and failures. In quality-of-care discussions, we often hear "we are within or below known rates of complications." But, of course, there is variation - there are best performers and worst performers, and the latter should be learning from the former. But how can they when such information is not transparent? Leaders are not demanding pursuit of such learning opportunities. Individual variation in performance stays hidden until someone is seriously harmed and inquiries are made about the physician's performance. Why not use the data to promote improvement before harm happens?
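
To see how such data could flag an outlier before harm happens, here is a minimal sketch in Python. The benchmark rate and the case counts are purely illustrative assumptions, not data from any real practice; the technique is simply an exact binomial tail probability against a known complication rate.

import math

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more
    complications by luck alone if the true rate equals the benchmark p."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative assumptions: a 2% benchmark complication rate, and a
# surgeon with 9 complications in 200 cases (an observed rate of 4.5%).
benchmark = 0.02
cases, complications = 200, 9

p_value = binom_tail(complications, cases, benchmark)
print(f"Observed rate: {complications / cases:.1%}")
print(f"Chance of >= {complications} complications if truly at benchmark: {p_value:.4f}")
# A small tail probability suggests the excess is unlikely to be luck and
# deserves a closer, non-punitive look - before a patient is harmed.

A check like this is crude (it ignores case mix, which real comparisons must adjust for), but it shows the kind of early signal that transparent data would make possible.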

Anonymous said...

There has been a growing focus on customer satisfaction in many industries over the past few years, and even more so given the economic struggles we are going through. Many organizations are looking at customer service initiatives and training their workforce to provide a unique customer experience. Over the years I have been to many of these trainings and have seen remarkable changes in customer perception. That said, I'm always a little put off by these initiatives because they seem to be common-sense stuff. The overarching theme is: treat people nicely. Not rocket science. So why is it that physicians, who have extensive training in the difficult area of medicine, have a hard time embracing this concept? Bedside manner is now being taught at many schools of medicine. One could argue that it is part of the treatment plan. Think of how you would feel if you went in to see a psychologist and they were rude or cold to you. Psychologists know that the relationship is part of the treatment. Outcomes can only be better if that relationship is fostered by the medical physician as well. If, as for most psychologists (I am aware there are good and bad psychologists), this were a given in care, the consumer or referring PCP could then just look at outcomes, rather than soft relationship issues, as a guide to choosing the best fit. Physicians should be trained, like other healthcare professionals, to understand the therapeutic relationship as part of the entire treatment of the patient.

Anonymous said...

Great question! As others have pointed out, patients may need multiple metrics to select the right physician. Key decision criteria include medical specialty knowledge, an understanding of the implications of specialty treatment for the entire patient, good diagnostic skills, and a good treatment track record. Most of these are hard to measure because of all the variables involved. One can look at credentials, others' experience, and one's own experience, and speak with other physicians. That said, all of these are unreliable data sources, because none of these individuals has watched the physician diagnose and treat patients first-hand. In fact, a physician once said that the best recommendation you can get is from a medical resident who has observed multiple doctors treating patients over a period of time in the same setting. That makes sense to me, but how do you get to know the medical residents once they are no longer your peers?

As patients, we have two choices: assume that anyone with good credentials or a good reputation is probably good enough, or get information from multiple sources to increase confidence in our choices. The questions one needs to ask of quantitative data are whether we are measuring the right thing and how we ensure consistency across the sample (which is not homogeneous). Qualitative data is more valuable, but it requires rich detail about symptoms, behaviors, etc. - rather than opinions - and very large sample sizes.

Anonymous said...

Hello Paul,

I’m really interested to read more provider responses to your question, because it helps more of the lay folk among us understand the management problem in health care delivery – and how MDs perceive it.

We talk about health care being like a team sport, though it's not always organized that way. The teammates are the PCPs and referring specialists, and also the patients. Unlike a sports team, it is common for the providers of chronic care patients to literally not “practice” together – they are located in different towns, different practice settings, and different specialties. As a non-clinician I do not pretend to have experienced the real challenges of delivering quality care in our system. But I do think it's important to highlight that it is dangerous to single out individual providers as being “good” or not in a public report like the Zagat patient survey. At worst, the Zagat survey points to an MD's customer service appeal more than to his/her technical abilities, which could be damaging to reputations, as you pointed out [the Zagat survey is based on measures of “trust, communication, availability and cost”].

The real health outcomes – the technical ones – require careful measurement and, for our chronic conditions and acute/post-acute care, attribution to the care “team” – so, to disparate providers. Dr. McGill, above, stated he wished this data were available when making his referrals, which is fantastic. Objective individual-level data would be a first step toward helping providers help each other and their patients, but there is still this problem of operating in practice silos. I wonder if we're not asking MDs to solve too much on their own.

Thanks for a great presentation at the Harvard School of Public Health today!

Anonymous said...

Given a choice between subjective and objective physician ratings, subjective ratings are the more valuable. Patients recognize a great physician when they see one, and should be able to share this information.
In fact, they already do. Chat rooms and blogs have proliferated all over the internet to informally rate physicians, not only subjectively but also unencumbered by any statistical power.
Any improvement in this process would be a win.
We already employ many metrics to assess the technical competency of doctors, for example board certification.
What we need are measures of physicians' "bedside manner." Indeed, physicians themselves would benefit from receiving such feedback almost as much as their patients would.

e-Patient Dave said...

I was all set to pounce on this story and blog about it, because I'd have seventeen layers of concern about what kind of person was judging a doctor's quality. (As a nightmare example, read the notorious When the Patient is a Googler and my retort When the Patient is a Yahoo.)

But then I read it and saw it wasn't about rating competence; it was about "categories like trust and communication." "Oh!" thought I. "That's different."

It is indeed a dangerous area, as Lachlan says. We who READ the ratings need to wise up about what we're reading and realize it's just one input, and it could be screwy. And so must anyone who's being rated.

Look, people talk a lot about how good Angie's List is; heck, Angie herself spoke at Connected Health last fall. But I've had two seriously bad experiences with home service providers who were highly rated on Angie's List. My takeaway is the opposite of Lachlan's: his view of his colleague was adversely affected by that one bit of feedback, but my view OF the feedback was adversely affected by my first-hand experience. Any rating is just one input, and it could be off.

Another view: on eBay and Amazon, a very large number of high or low ratings is pretty reliable.
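
That intuition can be made concrete. Under a simple thumbs-up/thumbs-down model (an assumption for illustration; not how eBay or Amazon actually compute anything), the Wilson score interval shows how far the true positive share could plausibly sit from the observed one at different rating volumes:

import math

def wilson_interval(positive, total, z=1.96):
    # 95% confidence interval for the true positive-rating share.
    if total == 0:
        return (0.0, 1.0)
    p = positive / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return (center - half, center + half)

# The same 90% positive share carries very different certainty:
for total in (10, 100, 1000):
    lo, hi = wilson_interval(int(0.9 * total), total)
    print(f"{total:>5} ratings: true share plausibly {lo:.0%}-{hi:.0%}")

With 10 ratings, a "90% positive" doctor could plausibly be anywhere from about 60% to 98%; with 1,000 ratings the range narrows to roughly 88%-92%. Volume is what makes the big-site ratings informative.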

I don't know how it'll all shake out. Being publicly judged by others is challenging at first; I've come to accept that some people are crazy and there's no accountin' for tastes, let alone the variability of provider or reviewer having a bad day. (As a professional seminar instructor, I got rated every day by every participant, and sometimes it was not pretty. I argued like crazy with my boss at times, but eventually I learned to be responsible for how people experienced me, and I became a better speaker as a result.)

Result today: in the comments about my call for a Patients Speakers Bureau, I said we too should expect to be rated: "we ought to set things up so we can get rated, just as many folks want to rate doctors. Plus, I know from career work that speaker ratings help speakers improve."

Lachlan Forrow, MD, FACP said...

I love the Amazon rating system (and others like it) and can hardly bring myself to buy anything anymore without checking out those ratings (in addition to the more systematic ones by Consumer Reports). And I'm not at all fazed when a small percentage of the raters trash the product (and when there are 500 ratings, 5% awful means 25 nasty criticisms).

I think there is something very different about published, on-line individual personal assessments by patients of a named physician. As I write this, there is at least a part of me that is thinking that if every time I saw a patient I knew that afterward that patient might go to the web and post a personal comment about me and the encounter, and that (given the rules of MD-pt confidentiality) I couldn't respond in any way, then I don't think I would want to practice medicine. Or even if I did, I think I would find it far harder to be a good doctor.

Now if every patient I saw posted a comment, or a scientifically-valid representative sample did, then my anxiety would go down at least a little. The fact that the people who will post those assessments, at least in the early years of any system, are not even remotely likely to be representative in the way that results from a more systematically-designed review process would be makes this feel very, very dangerous.
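
That unrepresentativeness is easy to quantify with a toy simulation; every number below is an assumption chosen only for illustration, not an estimate of real posting behavior. Suppose 90% of a doctor's patients are satisfied, but an unhappy patient is five times as likely to post a review as a happy one:

import random

random.seed(17)

SATISFIED_SHARE = 0.90  # assumed true satisfaction rate
P_POST_HAPPY = 0.02     # assumed chance a satisfied patient posts a review
P_POST_UNHAPPY = 0.10   # assumed chance a dissatisfied patient posts (5x higher)

reviews = []
for _ in range(10_000):  # simulate 10,000 patient visits
    happy = random.random() < SATISFIED_SHARE
    if random.random() < (P_POST_HAPPY if happy else P_POST_UNHAPPY):
        reviews.append(happy)

print(f"True satisfaction rate:    {SATISFIED_SHARE:.0%}")
print(f"Positive share of reviews: {sum(reviews) / len(reviews):.0%}")
# Expected positive share = (0.9 * 0.02) / (0.9 * 0.02 + 0.1 * 0.10), about 64%:
# self-selection alone makes a 90%-satisfied doctor look mediocre.

A systematic, representative survey samples all 10,000 visits; an open comment board samples only the few hundred who choose to post.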

Of course, I say this from the perspective of a physician who would be anxious about what people might write about me.

But all of this discussion is so far pretty theoretical, and I think some evidence would help ground us. Does anyone know how the similarly controversial systems for on-line, anonymous student comments about individual teachers are evolving?

--Lachlan

e-Patient Dave said...

Thanks for a great comment, Lachlan.

Honestly, it feels a little awkward, being in favor of this new world of "participatory medicine." We have new roles to learn. While I see a patient role that brings greater participation, it's participation in partnership - and I don't just mean partnership like "you're on the other side of the table and I'll be your partner," I mean like "We're in this together." For me that brings a sense of caring for and about each other's needs.

And yeah, it feels VERY odd (presumptuous?) to be saying I care about my physicians' needs, but I do.

Unknown said...

Everyone is afraid of being assessed - it pulls away the shroud of mystery and exposes good and bad alike.
Sure, it's difficult: assessing a doctor and his care is challenging, and different from assessing how good an oil change was. But:
1) We have to start somewhere and adapt as we learn more about this concept.
2) It is currently being done anyway, with little consistency or transparency.
3) Like it or not, much of the rating is about the overall experience - not so much the care or the doctor as the decor, cleanliness, friendliness and helpfulness of the staff, the quality of the food... I know I'll get some backlash on this, but to quote a recent discussion with a specialist radiologist in cancer care: "the value measurement has changed: it used to be measured based on whether you were carried out in a box or walked out, now we are so much better and more successful the measure of success is about everything else, food, decor, the linen."
4) Having seen some of the shocking comparisons of success/failure rates at different hospitals for the same conditions, even taking account of different case mixes, I would definitely want some comparative indicator of quality to help me choose what I believe to be the best care for myself.
So get over it and let's get the ball rolling.

Nick van Terheyden, MD
Chief Medical Officer
M*Modal
www.mmodal.com

Anonymous said...

The elephant in the room is the lack of incentive to complete rating questionnaires on the part of patients who have "average" or "typical" experiences with the provider they're rating.

Slate did a nice analysis of why no one really uses physician-rating tools.

http://blog.getbetterhealth.com/online-physician-ratings-where-is-the-value-proposition-for-the-respondents/2008.12.01