The War on Science

Science is, apparently, losing. But not in the way you might think. Looks as though there’s a lot of phony “science” out there:

NEW YORK (Reuters) – A former researcher at Amgen Inc has found that many basic studies on cancer — a high proportion of them from university labs — are unreliable, with grim consequences for producing new medicines in the future.

During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 “landmark” publications — papers in top journals, from reputable labs — for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development.

Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.

I think that a reasonable working hypothesis is that, at least in bio-medical science, the peer review system is broken. I don’t entirely know why that is. The incentives for novel research are pretty high, but is that sufficient to explain this result?

Maybe some areas of science have reached a point of specialization at which there are no peers—nobody knows enough about what anybody else is doing to provide adequate review. Maybe the cost of duplicating results is sufficiently high that few even bother. Maybe the volume of what’s being published is so high that a lot of bogus findings slip through. I really don’t know.

It certainly isn’t a confidence-builder. And people wonder why there are so many who don’t trust scientists.

I note, however, that the finding does support an observation I made back when I was a grad student: if you take ten grad students and assign half to prove something and the other half to disprove it, all of them will succeed.

Update

Arnold Kling suggests a remedy:

We can fix this problem. If government and other funders of research were to shift more resources toward replication, this would do two things. First, it would catch more bad science sooner. Second, it would take away some of the incentive to do bad science, because it would raise the risk of getting caught.

I’m skeptical. As far as I know, nobody ever got a Nobel Prize for replicating somebody else’s experiments. Has anyone ever received a Nobel for being unable to replicate somebody else’s experiment? I doubt it. Do people get doctorates for replicating other people’s experiments or for failing to replicate them? I doubt that, too.

Replicating experiments sounds to me like Congressional ethics. Everybody’s in favor of it but it’s such a thankless task that nobody wants to pursue it.

Redundancy is pretty hard to sell to most businesses even if it does result in more durable systems.

10 comments
  • Jimbino

    Can you win the Nobel by duplicating somebody else’s experiments? I don’t think you framed the question right. Nobody cares about the particular experiment, except to the extent that it validates the theory.

    Yes, indeed, you can gain the Nobel by performing a different experiment to validate the same theory, whether the theory pertains to the speed of light, Maxwell’s equations, special relativity, general relativity, the dual nature of light, etc.

  • Not exactly. Michelson received the Nobel Prize (Morley did not) for inventing the interferometer, useful for more than measuring the speed of the earth relative to the aether. And they didn’t fail to replicate somebody else’s experiment but produced a novel experiment of their own that disproved a standing theory. I thought of the Michelson-Morley experiment specifically when I wrote this post.

    I think my point still stands. I don’t think there will ever be enough incentive to replicate other people’s experiments to offset the perceived rewards of finding some breakthrough, particularly in biomed. Unlike others who’ve commented on this news article, I don’t infer that the studies were fraudulent. I think they were sloppy.

    Actually, I think there’s another interesting aspect of this article: what made the 53 studies landmark? Important findings? What we may be seeing is that important findings are becoming increasingly hard to come by, something I’ve written about before here.

  • Rich Horton

    “The incentives for novel research are pretty high, but is that sufficient to explain this result?”

    I’m pretty sure the answer is yes. In the article there is this exchange:

    “Part way through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.
    “‘We went through the paper line by line, figure by figure,’ said Begley. ‘I explained that we re-did their experiment 50 times and never got their result. He said they’d done it six times and got this result once, but put it in the paper because it made the best story. It’s very disillusioning.'”

    The “best story” for what, do you think? Obviously, the best story for continued funding. “Discoveries,” real or imagined, lead to more and better grant proposals. Bringing in the grant money is an ever-increasing component of university employment. Not bringing in enough can be reason to deny tenure, not to mention that getting these grants can also bring financial benefits (modest, usually, but real) for the researchers themselves. Additionally, there is very little risk involved in the production of research which is bad science but good at attracting grant money. You can see how the outright malfeasance we see from time to time (especially in bio-med) is encouraged: “Look at all the bad science and the shortcuts and the omitting of data going on. Maybe fudging a few numbers wouldn’t be so bad…”

  • If that’s the case, it seems to me that the prognosis is very bad, indeed. Basically, it says that science is dead. Note that he found that most of the results could not be repeated.

  • Rich Horton

    It could be that the Research University model is simply wrong. The idea was that we would have universities engage in purely theoretical research (or at least more purely theoretical research), in many cases the type of research businesses wouldn’t do because there didn’t exist a clear-cut profit motive. (The more theoretical something is, the less likely it is to pan out.) So the eggheads would do their work free from any interference, and when they actually discovered something (“Eureka!”), business could take it from there.

    However, in practice this has never worked as smoothly as it looks on paper. For starters, university scientists do not work free from interference. Funding, for example, is not free and open to all scientific inquiries. Particular lines of research are often incentivized to the detriment of other lines. Scientist A may have a great idea that could lead to a drug to treat disease X. However, the government deems disease Y more important (it affects many more people than disease X, let’s say). The result is that there is no grant money available to support research into disease X. Scientist A may have no real choice but to back-burner more promising research in order to chase the funding.

    Other forms of interference could include the government’s support for particular policy goals, which winds up favoring certain kinds of research over others, or even certain kinds of findings over others.

    These types of interferences can be limited by moving away from categorical grants in the sciences and moving towards more of a block grant system. (It would probably be a good idea for the humanities and social sciences too.) This might not totally reform the current incentive system (and certainly the culture of lax standards may prove resistant to quick change/reform), but it would be a step in the right direction.

  • Brett

    It’s mostly that duplicating medical experiments is expensive, and there’s little glory in duplication (as you said). That so many of them were wrong isn’t surprising – you’d expect a fair number of them to get shot down after duplication fails, particularly with something so complex and difficult as medical science. It’s how the scientific process works.

  • “Best story”…Arjo Klamer, call your office!

    That was one of the key points my history of economic thought professor hammered us with day after day: it isn’t great science that will get you far; you must have a good narrative.

    Case in point: Rational Expectations….

    This concept was first put forward by John Muth. He applied it to agricultural futures markets. Boring. Solid, novel work, but boring. It went nowhere…until Leonard Rapping and Robert Lucas used the concept in their work on monetary theory. In economics, it was sexy ass shit.

    Muth: tenured professor. Lucas: tenured professor and Nobel Prize winner. If you don’t have a good story, it can be great science, but chances are it won’t go very far.

    “‘We went through the paper line by line, figure by figure,’ said Begley. ‘I explained that we re-did their experiment 50 times and never got their result. He said they’d done it six times and got this result once, but put it in the paper because it made the best story. It’s very disillusioning.’”

    Statistically speaking, it was a fluke. Begley and his team tried 50 times and failed. The previous group got lucky with six trials. It is really nothing more than doing a large number of coin flips, seeing a long sequence of, say, tails, noting that, publishing it, and calling it good science. In the end, it’s shit. Peer reviewed, but published shit.
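    To put rough numbers on that intuition, here is a minimal sketch (the 5% per-run false-positive rate and the 50% per-run power are assumed figures for illustration, not anything reported in the article):

    ```python
    # Back-of-the-envelope odds: "one hit in six tries" for the original lab
    # versus "zero hits in fifty tries" for Begley's team. Both rates below
    # are assumptions for illustration, not figures from the article.

    alpha = 0.05  # assumed chance a single run yields a spurious positive (no real effect)
    power = 0.50  # assumed chance a single run detects a genuine effect

    # Original lab: probability of at least one fluke "hit" in 6 runs under the null
    p_fluke = 1 - (1 - alpha) ** 6

    # Begley's team: probability of 50 straight misses if the effect were real
    p_all_misses = (1 - power) ** 50

    print(f"P(at least one fluke in 6 null runs): {p_fluke:.2f}")       # ~0.26
    print(f"P(zero hits in 50 runs, real effect): {p_all_misses:.1e}")  # ~8.9e-16
    ```

    Even with honest runs, reporting the best of six tries gives roughly one chance in four of a publishable fluke, while fifty straight misses is essentially impossible if the effect is real.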

    Kling is right: more emphasis should be put on replicating and reviewing scientific work. Data should be put into the public domain. Statistical programs, algorithms, everything, into the public domain. Any failure to do this should mean summary rejection of the article at any journal. Of course, this won’t happen, because the scientific journals are, in the end, not unlike the mass media. If you’ve got what looks like “sexy ass shit,” you publish it before the authors toddle off to another journal.

    In most cases there are only lukewarm attempts to put data and algorithms out there. Scientists often resist strenuously, which is not unreasonable, because when somebody says, “Hey, I’d like your data and algorithms so I can see how you got your results,” they hear, “I’m checking to make sure you did it right, and if you didn’t, I might make a big deal about it.”

    Case in point: Michael Mann et al. and the hockey stick. Bet you’ll get some posts on this one. 🙂

  • Data should be put into the public domain. Statistical programs, algorithms, everything, into the public domain.

    I think that’s the crux of the problem. Enormous emphasis and power have been bestowed on intellectual property. The temptation to cheat may well be overwhelming.

  • steve

    It costs a lot to duplicate medical studies. Many are funded by pharma or a device maker. They have no incentive to redo studies. If you want these funded, it will have to be government funding. On the clinical side, this is why most of us like to see studies duplicated before we jump on board with new therapies. The FDA haters are always pushing to get results and new drugs out faster. They forget that the incentives are already lined up in favor of cutting corners.

    Steve
