Groopman On Comparative Effectiveness

Jerome Groopman has an article in The New York Review of Books that outlines some of my misgivings about comparative effectiveness research:

Over the past decade, federal “choice architects”—i.e., doctors and other experts acting for the government and making use of research on comparative effectiveness—have repeatedly identified “best practices,” only to have them shown to be ineffective or even deleterious.

For example, Medicare specified that it was a “best practice” to tightly control blood sugar levels in critically ill patients in intensive care. That measure of quality was not only shown to be wrong but resulted in a higher likelihood of death when compared to measures allowing a more flexible treatment and higher blood sugar. Similarly, government officials directed that normal blood sugar levels should be maintained in ambulatory diabetics with cardiovascular disease. Studies in Canada and the United States showed that this “best practice” was misconceived. There were more deaths when doctors obeyed this rule than when patients received what the government had designated as subpar treatment (in which sugar levels were allowed to vary).

There are many other such failures of allegedly “best” practices. An analysis of Medicare’s recommendations for hip and knee replacement by orthopedic surgeons revealed that conforming to, or deviating from, the “quality metrics”—i.e., the supposedly superior procedure—had no effect on the rate of complications from the operation or on the clinical outcomes of cases treated. A study of patients with congestive heart failure concluded that most of the measures prescribed by federal authorities for “quality” treatment had no major impact on the disorder. In another example, government standards required that patients with renal failure who were on dialysis had to receive statin drugs to prevent stroke and heart attack; a major study published last year disproved the value of this treatment.

There are lots of reasons why this might be. Medicine, as my physician friends occasionally say, is complicated. Even experts follow trends. And the experts making the decisions may have incentives other than cost and patient benefit that impel them to impose the standards they do.

Indeed, I continue to wonder whether changes that nudged incentives in a less perverse direction than the current system's and allowed a certain amount of flexibility might yield both better outcomes and lower costs.

Hat tip: Mickey Kaus

Comments
  • Andy

    Plus the fact that people are not machines with identical working parts that can be treated the same in every case.

  • That’s my interpretation of the “medicine is complicated” point.

  • steve

    I think you have this mislabeled. This is really best practices, or evidence-based medicine. Odd that they left off things like the reduction of central line infections, the use of pre-op checklists, and giving antibiotics on time.

    I forget the name of the fallacy at play here, but this is really just a case of not being perfect. In fact, the system is working. They did studies. They made recommendations based upon those studies. THEN, they followed up on them. When they did not get the expected results, they altered or stopped the practices. Someone who is not a physician or scientist may focus on not getting it right the first time. Those of us in practice know it takes lots of follow-up to find out what really works. In my field, as an example, we continue to debate the merits of perioperative beta blockers. I guess we could give up because we did not get it right the first time, but that does not seem right to me.


  • I merely followed the lead of the author of the article cited. That's how he characterized it, so I did the same.
