“I explained that we re-did their experiment 50 times and never got their result. He said they’d done it six times and got this result once, but put it in the paper because it made the best story. It’s very disillusioning.”

I keep trying to teach my statistics students that you have to decide on certain things ahead of time: how unusual a result has to be before you call it “statistically significant,” how you pick your sample, how many times you will run the experiment, and so on. Decide those things after you have seen the data and you can make the numbers say almost anything. As Mark Twain allegedly said, there are three kinds of lies: lies, damned lies, and statistics.
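To see why those rules have to be fixed in advance, here is a quick toy simulation of what happens when they aren't (my own sketch, not part of the story below): the treatment does nothing at all, but we “peek” at the p-value after every new batch of animals and stop the moment it dips below 0.05.

```python
# A toy simulation: the drug has NO effect, yet we test after every batch
# of 10 observations and declare victory the first time p < 0.05. With up
# to ten such "peeks", the false-positive rate is no longer the advertised
# 5%; in runs like this it climbs to roughly 19%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments, batch, max_batches = 10_000, 10, 10

false_positives = 0
for _ in range(n_experiments):
    data = np.empty(0)
    for _ in range(max_batches):
        data = np.append(data, rng.normal(0.0, 1.0, batch))  # pure noise
        if stats.ttest_1samp(data, 0.0).pvalue < 0.05:       # peek early
            false_positives += 1
            break

print(f"false-positive rate: {false_positives / n_experiments:.1%}")
```

Fix the sample size in advance and test exactly once, and the error rate stays at the advertised 5 percent; that is the whole point of deciding the rules before you look at the data.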

Here’s a little more of the background, and since it has to do with cancer treatments, it’s more than a little disturbing:

A former researcher at Amgen Inc has found that many basic studies on cancer — a high proportion of them from university labs — are unreliable, with grim consequences for producing new medicines in the future.
During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 “landmark” publications — papers in top journals, from reputable labs — for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development.
Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.
“It was shocking,” said Begley, now senior vice president of privately held biotechnology company TetraLogic, which develops cancer drugs. “These are the studies the pharmaceutical industry relies on to identify new targets for drug development. But if you’re going to place a $1 million or $2 million or $5 million bet on an observation, you need to be sure it’s true. As we tried to reproduce these papers we became convinced you can’t take anything at face value.”
The failure to win “the war on cancer” has been blamed on many factors, from the use of mouse models that are irrelevant to human cancers to risk-averse funding agencies. But recently a new culprit has emerged: too many basic scientific discoveries, done in animals or cells growing in lab dishes and meant to show the way to a new drug, are wrong.
Begley’s experience echoes a report from scientists at Bayer AG last year. Neither group of researchers alleges fraud, nor would they identify the research they had tried to replicate.
But they and others fear the phenomenon is the product of a skewed system of incentives that has academics cutting corners to further their careers.
George Robertson of Dalhousie University in Nova Scotia previously worked at Merck on neurodegenerative diseases such as Parkinson’s. While at Merck, he also found many academic studies that did not hold up.
“It drives people in industry crazy. Why are we seeing a collapse of the pharma and biotech industries? One possibility is that academia is not providing accurate findings,” he said.
Other scientists worry that something less innocuous explains the lack of reproducibility.
Partway through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.
“We went through the paper line by line, figure by figure,” said Begley. “I explained that we re-did their experiment 50 times and never got their result. He said they’d done it six times and got this result once, but put it in the paper because it made the best story. It’s very disillusioning.”
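The arithmetic behind that confession is worth spelling out. If the effect simply is not there, each attempt still has a 5 percent chance of clearing the conventional p < 0.05 bar by luck alone, so six attempts give better than one-in-four odds of producing a publishable “best story.” (A back-of-the-envelope sketch; the 5 percent threshold is my assumption, since the article does not say what cutoff the lab used.)

```python
# Odds that at least one of six runs of a no-effect experiment looks
# "significant" at the conventional p < 0.05 level purely by chance.
alpha, tries = 0.05, 6
print(f"{1 - (1 - alpha) ** tries:.1%}")  # prints 26.5%
```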
Such selective publication is just one reason the scientific literature is peppered with incorrect results.
For one thing, basic science studies are rarely “blinded” the way clinical trials are. That is, researchers know which cell line or mouse got a treatment or had cancer. That can be a problem when data are subject to interpretation, as a researcher who is intellectually invested in a theory is more likely to interpret ambiguous evidence in its favor.
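Blinding an analysis takes very little machinery, which makes its rarity in basic research all the more striking. A minimal sketch (my illustration; the labels are invented): the person scoring the samples works only from anonymous codes, and the code-to-group key is consulted only after every measurement is recorded.

```python
# A minimal sketch of blinding: the scorer works from anonymous sample
# codes and never sees which animal was treated; the key that maps codes
# back to treatment groups is consulted only once scoring is locked in.
import random

groups = {"mouse_01": "treated", "mouse_02": "control",
          "mouse_03": "treated", "mouse_04": "control"}

animals = list(groups)
random.shuffle(animals)
blind_codes = {f"sample_{i:02d}": animal for i, animal in enumerate(animals)}

# The scorer records measurements against sample_00, sample_01, ... only.
# After all scores are recorded, unblind:
key = {code: groups[animal] for code, animal in blind_codes.items()}
print(key)
```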
The problem goes beyond cancer.
On Tuesday, a committee of the National Academy of Sciences heard testimony that the number of scientific papers that had to be retracted increased more than tenfold over the last decade, while the number of journal articles published rose only 44 percent.
Ferric Fang of the University of Washington, speaking to the panel, said he blamed a hypercompetitive academic environment that fosters poor science and even fraud, as too many researchers compete for diminishing funding.
“The surest ticket to getting a grant or job is getting published in a high-profile journal,” said Fang. “This is an unhealthy belief that can lead a scientist to engage in sensationalism and sometimes even dishonest behavior.”
The academic reward system discourages efforts to ensure a finding was not a fluke. Nor is there an incentive to verify someone else’s discovery. As recently as the late 1990s, most potential cancer-drug targets were backed by 100 to 200 publications. Now each may have fewer than half a dozen.
“If you can write it up and get it published you’re not even thinking of reproducibility,” said Ken Kaitin, director of the Tufts Center for the Study of Drug Development. “You make an observation and move on. There is no incentive to find out it was wrong.”
