Jerome Groopman has an article in The New York Review of Books that outlines some of my misgivings about comparative effectiveness research:
Over the past decade, federal “choice architects”—i.e., doctors and other experts acting for the government and making use of research on comparative effectiveness—have repeatedly identified “best practices,” only to have them shown to be ineffective or even deleterious.
For example, Medicare specified that it was a “best practice” to tightly control blood sugar levels in critically ill patients in intensive care. That measure of quality was not only shown to be wrong but resulted in a higher likelihood of death when compared to measures allowing a more flexible treatment and higher blood sugar. Similarly, government officials directed that normal blood sugar levels should be maintained in ambulatory diabetics with cardiovascular disease. Studies in Canada and the United States showed that this “best practice” was misconceived. There were more deaths when doctors obeyed this rule than when patients received what the government had designated as subpar treatment (in which sugar levels were allowed to vary).
There are many other such failures of allegedly “best” practices. An analysis of Medicare’s recommendations for hip and knee replacement by orthopedic surgeons revealed that conforming to, or deviating from, the “quality metrics”—i.e., the supposedly superior procedure—had no effect on the rate of complications from the operation or on the clinical outcomes of cases treated. A study of patients with congestive heart failure concluded that most of the measures prescribed by federal authorities for “quality” treatment had no major impact on the disorder. In another example, government standards required that patients with renal failure who were on dialysis had to receive statin drugs to prevent stroke and heart attack; a major study published last year disproved the value of this treatment.
There are lots of reasons why this might be. Medicine, as my physician friends occasionally say, is complicated. Even experts follow trends. And the experts making the decisions may have incentives other than cost and patient benefit that impel them to impose the standards they do.
Indeed, I continue to wonder whether reforms that nudge incentives in a less perverse direction and allow a certain amount of flexibility might produce both better outcomes and lower costs than our current system does.
Hat tip: Mickey Kaus