Sunday, February 23, 2014

A call for more reproducible research in drug discovery/development

Phase III drug trials are typically randomized, blinded, controlled clinical trials.  However, last month in JAMA, Djulbegovic et al. (2014) noted that "more than 80% of phase 1 studies and more than 50% of phase 2 studies are currently nonrandomized."  They argue that these early-phase studies should all be randomized, and that even preclinical studies in animals and cell cultures should be randomized as well.  (Randomization is probably even less prevalent in preclinical research than in the early-phase studies covered by the quote.)  In other words, the authors advocate study designs that encourage reproducibility across the spectrum of clinical and preclinical research.

They argue that nonrandomized studies can easily lead to incorrect decisions in both directions: advancing a useless treatment or abandoning a useful one.  Randomized studies are therefore more efficient and provide stronger backing for decision making.  (They also make the case that randomized studies are more ethical; I'm not sure I find their reasoning on that point as compelling.)  They hope that wider use of rigorous study designs across the drug development arena could be one way to address the industry's infamously high failure rate.
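To make the point concrete, here is a minimal simulation (my own sketch, not from the article) of how confounding can mislead a nonrandomized study.  Suppose a drug has zero true effect, but in an unrandomized setting clinicians tend to give it to patients with better baseline health; the treated group then looks better for reasons having nothing to do with the drug.  Randomizing the assignment removes the bias.  All numbers and names here are illustrative assumptions.

```python
import random

random.seed(0)
N = 100_000

def outcome(health):
    # True model: outcome depends only on baseline health; the drug does nothing.
    return health + random.gauss(0, 1)

# Nonrandomized: healthier patients are more likely to receive the drug
# (a confounded assignment mechanism).
treated_nr, control_nr = [], []
for _ in range(N):
    health = random.gauss(0, 1)
    treated = random.random() < (0.8 if health > 0 else 0.2)
    (treated_nr if treated else control_nr).append(outcome(health))

# Randomized: a coin flip decides treatment, independent of baseline health.
treated_r, control_r = [], []
for _ in range(N):
    health = random.gauss(0, 1)
    treated = random.random() < 0.5
    (treated_r if treated else control_r).append(outcome(health))

def mean(xs):
    return sum(xs) / len(xs)

# The nonrandomized comparison shows a large spurious "benefit";
# the randomized comparison is close to the true effect of zero.
print(f"apparent effect, nonrandomized: {mean(treated_nr) - mean(control_nr):+.2f}")
print(f"apparent effect, randomized:    {mean(treated_r) - mean(control_r):+.2f}")
```

Under these assumed numbers the nonrandomized comparison reports an apparent benefit of nearly one standard deviation for a drug that does nothing, while the randomized comparison hovers near zero, which is the core of the authors' efficiency argument: an unbiased design answers the question the trial is actually asking.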

Here is a key passage from the article, describing the literature in preclinical research.

This literature yields an excess of statistically significant findings that cannot be eventually replicated, let alone translated into clinical successes.  For preclinical research conducted by the industry, routine adoption of rigorous randomized designs should be straightforward--no company wants to spend millions of dollars for the clinical testing of useless treatments.  In fact, industry researchers have taken the lead in raising the concerns about the reproducibility of preclinical research and suggesting partial solutions.  For preclinical research conducted by non industry researchers, similar rigorous practices can also be routinely adopted and requested.  Funders and journals can specify that they will sponsor and publish animal studies only if they fulfill rigorous randomization criteria.  Justified exceptions to this rule are likely to be rare.
(I have not included the footnotes; see the original.)  The major lesson for me in the above passage is that the main contribution of statistics to such studies lies in the design, not the analysis.  "Statistically significant" findings by no means guarantee that a study can be reproduced, whereas sound study design greatly enhances the likelihood that it can.  Unfortunately, much of the teaching and practice of statistics, by statisticians and non-statisticians alike, tends to emphasize the mathematical/computational side rather than the study design side.

I'd like to see a devil's advocate's response to all this.  I find the authors' views compelling, and have difficulty imagining the grounds on which one might disagree.

Reference


Benjamin Djulbegovic, Iztok Hozo, and John P. A. Ioannidis, 2014:  Improving the drug development process:  more not less randomized trials.  Journal of the American Medical Association, 311 (4):  355-356.

