A number of experts and other folks have criticized the methodology used by ProPublica to indicate the relative rate of complications for surgeons across America.
Here's the issue in a nutshell, as I see it. There is a rigorous methodology available for evaluating surgical outcomes. It is from the American College of Surgeons, and it is called NSQIP. It is indeed the "leading nationally validated, risk-adjusted, outcomes-based program to measure and improve the quality of surgical care in the private sector."
Look at what the program offers to surgeons:
Surgeons who use ACS NSQIP receive:
- Better data for more targeted decision-making:
  - Peer-controlled, validated data from patients’ medical charts lets surgeons quantify 30-day, risk-adjusted surgical outcomes, including post-discharge, when nearly 50 percent of complications occur.
  - A variety of program options tailored to your hospital’s size and quality improvement interests.
- Robust reports that provide performance information to guide surgical care and identify areas for improvement for the greatest return and highest impact:
  - Continuously updated hospital performance reports and benchmarking analyses available in real time.
  - Nationally benchmarked and risk-adjusted reports provided semiannually.
  - Maintenance of Certification (MOC) Part IV credit for all surgeons at hospitals participating in the program.
  - Best practices tools, including Case Studies and evidence-based guidelines developed by ACS.
  - Opportunities to participate in regional and virtual collaboratives with other hospitals.
- Preoperative risk calculator:
  - Online tool helps clinicians make evidence-based decisions, and helps set reasonable patient expectations.
  - Takes into account patient risk factors like age and BMI for a growing number of common surgical procedures.
  - Better predictive ability than most other models.
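That last item is easy to make concrete. A preoperative risk calculator of this kind is, at its core, a regression over patient risk factors. Here is a minimal sketch, with invented variables and coefficients; it is not the actual ACS NSQIP model, which uses far more inputs and is not reproduced here:

```python
# Illustrative only: a generic logistic-regression risk estimate of the kind a
# preoperative calculator might produce. The variables and coefficients below
# are invented for this sketch; they are NOT the actual ACS NSQIP model.
import math

def predicted_complication_risk(age: float, bmi: float, emergency: bool) -> float:
    """Return a toy 30-day complication probability from a few patient factors."""
    intercept = -4.0                     # hypothetical baseline log-odds
    log_odds = (intercept
                + 0.03 * (age - 60)      # hypothetical effect of age
                + 0.02 * (bmi - 25)      # hypothetical effect of BMI
                + 1.2 * emergency)       # hypothetical effect of emergency surgery
    return 1 / (1 + math.exp(-log_odds)) # logistic link: log-odds -> probability

# Example: an 82-year-old with BMI 31 having an emergency procedure
print(f"Estimated risk: {predicted_complication_risk(82, 31, True):.1%}")
```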
So there we have it. We could all have a rigorously derived comparison tool, but since the profession chooses not to make it available, we must have a surrogate of the sort that ProPublica used in its article. Or nothing at all. What would be your choice?
13 comments:
I would like to preempt the inevitable rejoinders that if the data is not held confidential then surgeons won't participate, etc. etc. It is more than a bit hypocritical to complain about ProPublica's methodology when there are better methods out there which are being withheld.
The sooner our profession realizes that transparency to consumers in all things is an unstoppable force and takes the lead in initiating it, the better.
Agreed with nonlocal MD. There is no excuse at all for the medical profession or the admins to keep all this secret and use the juggernaut of profits to hide it and lobby for it. Bust it wide open. You will see quality because the bleeding of $$$$ will stop. Those docs who shouldn't be docs can be forced into other lines of work.
Had the medical profession done it a long time ago and truly policed their own, there would not be the hostility towards the profession that there is. They earned that one.
You can always ask the nurses.
Your faith in the NSQIP data may be misplaced. In my opinion, NSQIP has some major flaws.
The data submitted from each hospital is not comprehensive. Depending on the level of participation desired by the hospital, cases are selected, and in all but one category, the maximum number of cases submitted per year is 1680. In addition, all hospitals do not collect the same levels or amounts of data. (See https://www.facs.org/quality-programs/acs-nsqip/program-specifics/progoptions.)
The good news is that the data is not administrative, but rather is clinically oriented. However, each hospital assigns a nurse to collect it. I don't think the inter-rater reliability of the data collectors has ever been studied.
If you read published papers that have used NSQIP, you will find that missing data is a problem in many studies.
Many hospitals do not participate in NSQIP.
What, then, do you suggest?
That is a good question. In order to do this right, every hospital would have to participate in a clinical data collection system that could possibly be based on the framework of NSQIP. How would it be funded?
Maybe only certain procedures should be studied every year on a rotating basis.
It gets complicated. Take laparoscopic cholecystectomy, for example. The Surgeon Scorecard only included inpatient fee-for-service Medicare patients. Only a few general surgeons performed 20 such cases in the five years studied, which makes comparisons impossible.
To properly assess the complication rates for laparoscopic cholecystectomy, outpatient (day surgery) cases would have to be included. However, some complications do not require hospital admission and tracking patients who are admitted to different hospitals when they have complications is very difficult.
However, the complication rate for laparoscopic cholecystectomy, even in the higher-risk inpatient population covered by the Surgeon Scorecard, was only 4.8%. That rate is so low that statistically significant differences among surgeons may not be detectable.
Operations with higher complication rates, such as pancreatectomies, are not performed frequently enough to produce meaningful data.
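A back-of-the-envelope sketch makes the point concrete. Using a standard Wilson score interval (not ProPublica's actual adjustment model) and a hypothetical surgeon at the 20-case threshold with a complication rate near the 4.8% average:

```python
# Sketch: how uncertain is an individual surgeon's complication rate when the
# underlying event is rare? Wilson score interval, normal approximation, z = 1.96.
# The case count is hypothetical; 4.8% is the overall rate cited above.
import math

def wilson_interval(events: int, n: int, z: float = 1.96) -> tuple[float, float]:
    p = events / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# A surgeon with 1 complication in 20 Medicare cases (5%, close to the 4.8% average):
lo, hi = wilson_interval(1, 20)
print(f"95% CI: {lo:.1%} to {hi:.1%}")  # spans roughly 1% to 24% -- far too wide
                                        # to distinguish this surgeon from average
```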
Bottom line: I do not know how to measure complication rates in a way that could be acceptable to all concerned parties.
Yes, I fear that Skeptical Scalpel is missing the entire point, no? Without a viable alternative, people are going to use what they have at hand.
From a surgeon’s perspective, I would worry about the quality of the risk-adjustment mechanism. If it doesn’t fully capture risk related to age, multiple co-morbidities, socio-economic status and the like, it seems that there would be a clear incentive to avoid treating the riskiest patients so as not to penalize outcomes scores.
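To see why weak risk adjustment creates that incentive, here is a minimal sketch of observed-to-expected (O/E) adjustment, a common approach in outcomes reporting, though not necessarily the method ProPublica or NSQIP uses; all figures below are invented:

```python
# Sketch of observed-to-expected (O/E) risk adjustment -- a common approach,
# not necessarily the one ProPublica or NSQIP uses. All numbers are invented.
def oe_ratio(observed_complications: int, predicted_risks: list[float]) -> float:
    """O/E ratio: observed complications divided by the sum of per-patient
    predicted risks. Values above 1.0 read as 'worse than expected'."""
    expected = sum(predicted_risks)
    return observed_complications / expected

# Surgeon A takes very sick patients whose true risk is 20% each, but the model
# only credits them with 10% -- half the risk goes uncaptured.
true_risks    = [0.20] * 50
modeled_risks = [0.10] * 50
observed = 10  # about what 20% true risk would produce over 50 cases

print(oe_ratio(observed, modeled_risks))  # 2.0 -- looks twice as bad as expected
print(oe_ratio(observed, true_risks))     # 1.0 -- actually performing as expected
```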
From a patient’s perspective, though, I’m more interested in whether or not the surgeon is board certified, how long he’s been practicing and, more importantly, how many procedures of the type I need he does each month or year, and how that compares to the number the specialty society says is needed to keep skills sharp. Transparent, relevant information is needed so patients can make informed choices, especially with respect to procedures where the consequences of mistakes can be severe.
Finally, complications that occur after discharge may not be the surgeon’s fault at all. Instead, inadequate hospital discharge planning may be the culprit.
Brilliantly argued, Paul! Talk about a profession shooting itself in the foot. Bruce Keogh, a cardiac surgeon who is now Medical Director for NHS England, has campaigned for publication of outcomes by surgical team and has achieved that in his own specialism; most recently there has been a limited publication of surgical mortality rates across all specialisms as well.
Unless your medical colleagues follow suit and drive the publication agenda, they will get ProPublica and others doing it for them.
The perioperative risk calculator is a very useful tool. I use it all the time, especially when contemplating anesthetizing the proverbial "little old lady" with a hip fracture. The calculator is available to the public at www.riskcalculator.facs.org.
Gosh, I don't know whose points I agree or disagree with because they all were excellent. This is so interesting. Thank you, Mr. Levy.
You need to check out this paper on the NSQIP dataset, because it seems that it would be pretty difficult to create reliable measures of quality. The NSQIP set is probably a very good way of providing timely feedback to the participating surgeons, but not for differentiating performance reliably.
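For readers who want the statistical intuition behind "not reliably": the provider-profiling literature often expresses this as a reliability, or signal-to-noise, ratio that grows with caseload. A minimal sketch with invented variance figures (not taken from NSQIP or that paper):

```python
# Sketch of measure reliability as used in provider-profiling work:
# reliability = signal / (signal + noise), where noise shrinks with caseload.
# The variance figures are invented for illustration, not taken from NSQIP.
def reliability(between_var: float, p: float, n_cases: int) -> float:
    within_var = p * (1 - p) / n_cases  # sampling noise for a rate p over n cases
    return between_var / (between_var + within_var)

p = 0.048             # overall complication rate cited earlier in the thread
between_var = 0.0004  # hypothetical true between-surgeon spread (sd ~2 percentage points)
for n in (20, 100, 500):
    print(n, round(reliability(between_var, p, n), 2))
# With small caseloads, most of an individual surgeon's measured rate is noise.
```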
Just discovering this thread now. (It was a rough summer.)
I agree completely with Ashish Jha. My first industry, typesetting machines, was disrupted by desktop publishing ten years before Clay wrote that book; I lived it, including the disbelief and worry of good people in the trade that the world was going to hell and their craft, learned over a lifetime, was falling into the hands of people who had no clue. In my speeches these days I sometimes do a Jack Nicholson-style "You can't HANDLE the Helvetica!"
I'll add one thing to Jha's quote: yes, the disruptor is inadequate - but as it gains tons of adopters and adds capability, the additions are driven by what the consumer finds valuable, which sometimes coincides with the incumbent's view and sometimes doesn't.
I was on a webcast this week supposedly about the "consumerization" of healthcare, sponsored by AHIP, the health insurance industry's trade organization. There was a palpable difference between what industry people talk about (largely "getting patients to do things") and what autonomous patients themselves actually discuss at Medicine X, the only conference in the world that is completely patient-generated.
At Health 2.0 this week, Alexandra Drane put up two slides that brilliantly illustrate the difference in perspective of the healthcare industry and the patient. Susannah Fox tweeted pix. Look: the industry (including surgeons I'm sure) view the issues in the buckets on the left; the person with the problem (the patient) experiences it as shown on the right. My whole point is that when consumers start running things, solutions morph to be oriented to what THEY want, not what the people on the left think about. And THAT is disruption.