The current crisis of veracity in biomedical research (and with fewer than half of preclinical studies proving reproducible \cite{Begley_2012,Prinz_2011}, it is truly a crisis) has spilled from a discussion in scientific journals \cite{Begley_2012,Casadevall_2010,Collins_2014,Fang_2012,Freedman_2015,Ioannidis_2005,Ioannidis_2017,Leek_2016} onto the pages of national newspapers \cite{Angell_2009,Glanz_2017,Carey_2015} and into popular books with provocative titles \cite{Harris_2017}. This development suggests that scientists might need to put their house in order before asking for more money to expand it.
The approaches that have been tried or proposed include calling on scientists to do better and “publish houses of brick, not mansions of straw”
\cite{Kaelin_Jr_2017}, perhaps under the scrutiny of video surveillance in the laboratory
\cite{Clark_2017}; requiring raw data and additional information when submitting an article
\cite{Editorial_2017} or a funding report (\url{https://grants.nih.gov/reproducibility/index.htm}); and establishing reproducibility initiatives that replicate prior studies to serve as a deterrent against future lapses in scientific rigor. One of these initiatives, the Reproducibility Project: Cancer Biology, was organized following the report that only 6 of 53 landmark cancer research studies could be verified
\cite{Begley_2012} and set out to replicate 50 of the 290,444 cancer research reports published by the field between 2010 and 2012
\cite{Errington_2014}. The reports on replicating the first seven studies have been published this year in eLife
\cite{Aird_2017,Kandela_2017,Mantis_2017,Horrigan_2017,Shan_2017,Showalter_2017}.
We would like to use these reports to suggest how the credibility crisis can be solved effectively and at a relatively small cost by assigning each published scientific claim a simple measure of veracity, which we call the R-factor \cite{Nicholson_2014}, with R standing for reproducibility, reputation, responsibility, and robustness.
The R-factor is based on the same basic rule of science that underlies the replication initiatives: a scientific claim should be independently confirmed before it is accepted as fact. Accordingly, it is calculated simply by dividing the number of published reports that have confirmed a scientific claim by the number of published attempts to test it; to emphasize, this calculation excludes citations that merely mention the claim without testing it. The result is a number between 0 (no testing study confirmed the claim) and 1 (every testing study confirmed it). The R-factor of an investigator, a journal, or an institution would be the average of the R-factors of the claims they reported.
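To make the arithmetic explicit, the definition can be written as follows (the symbols below are our shorthand for this illustration, not established notation):
\[
R \;=\; \frac{n_{\mathrm{confirmed}}}{n_{\mathrm{confirmed}} + n_{\mathrm{refuted}}},
\qquad
\bar{R} \;=\; \frac{1}{K}\sum_{k=1}^{K} R_k,
\]
where $n_{\mathrm{confirmed}}$ and $n_{\mathrm{refuted}}$ count only the published studies that actually tested the claim, and $\bar{R}$ averages the R-factors $R_1,\dots,R_K$ of the $K$ claims reported by an investigator, a journal, or an institution. For example, a claim tested by five independent studies, four of which confirmed it, would have $R = 4/5 = 0.8$.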
The R-factor also rests on another principle of science: that new research should proceed from a comprehensive understanding of previous work. Following this principle is becoming more difficult because the sheer number of publications overwhelms even experts, making their expertise ever narrower. The R-factor would help solve this problem not only by providing a measure of veracity for a published claim, but also by pointing to the studies that verified or refuted it.
Let us illustrate the approach we propose using three of the cases evaluated by the Reproducibility Project and then discuss how the R-factor could help make biomedical research more trustworthy.