The current options for determining whether a scientific claim or a scientist is reliable are to consult insiders in the field, which may require personal connections to obtain a frank assessment and presumes that the insiders are not themselves misled; to review the dozens to thousands of articles that cite the report of interest; or to replicate the study independently, which can be expensive or, in times of financial constraints, unaffordable.
In the absence of easily accessible information about the reliability of reported claims, and with a deluge of publications that can overwhelm even an expert, the careers of academic researchers are affected little by the veracity of what they publish, or the lack thereof, short of scandals involving outright fraud. Instead, careers depend on the number of published articles, the number of citations, and the impact factors (citation indexes) of the journals in which the articles appear (Fig. \ref{294322}, left).
Having the R-factor indicated on the first page of the report, in much the same way as the Altmetric logo now informs the reader how popular the study is on social networks \cite{Warren_2017}, would give anyone a numerical estimate of the report’s trustworthiness. Having this information openly and freely available, used, and discussed would enable, and perhaps even force, the evaluation system to consider the veracity of reports and investigators in decisions related to their careers (Fig. \ref{294322}, right), and would give the public a tool to judge for themselves. Importantly for the fairness of these decisions and judgments, the R-factor would reflect not a single replication study but the sum of all reported attempts to reproduce the claim. Likewise, the credibility of an investigator would be estimated from the average veracity of all claims that they have reported, not unlike the batting average in baseball.
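One way to make this aggregation explicit, assuming the R-factor of a claim is defined as the fraction of confirming reports among all published attempts to test it (the exact definition and weighting are choices to be settled, not prescribed here), is:
\[
R_{\mathrm{claim}} = \frac{N_{\mathrm{confirm}}}{N_{\mathrm{confirm}} + N_{\mathrm{refute}}},
\qquad
R_{\mathrm{investigator}} = \frac{1}{k}\sum_{i=1}^{k} R_{\mathrm{claim},\,i},
\]
where $N_{\mathrm{confirm}}$ and $N_{\mathrm{refute}}$ are the numbers of published reports that confirm or contradict the claim, and $k$ is the number of claims the investigator has reported for which an R-factor is available.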
Because the R-factor can change over time (Fig. \ref{900585}), and, unlike citation indexes, which cannot decrease, not always for the better, our approach can help change the current perception that a publication is a trophy that, once acquired, will shine in its author’s resume forever, propped up by the citation index of the journal in which the report appeared. Instead, the awareness that the R-factor can change for the worse, for everyone to see, is likely to make authors, especially those who do the experiments but sometimes have little say in how the results are represented in the publication, more insistent that the data and the conclusions be verified before the manuscript is submitted.
Of course, the R-factor has its share of shortcomings and is by no means an ideal measure of scientific excellence, nor a panacea by itself. However, as Churchill said about democracy, “[it] is the worst form of Government except for all those other forms that have been tried from time to time.” We suggest the same can be said about the principle that scientific claims must be independently verified before being accepted as facts. The R-factor helps to apply and represent this principle in an easy-to-understand and easy-to-use form (Fig. \ref{900585}), providing sorely missing feedback in the system that governs biomedical research.
Although calculating the R-factor for a handful of reports is relatively simple, especially for an expert in the field, the question is who will calculate the R-factors for the thousands of researchers and their hundreds of thousands, or even millions, of reports. While these numbers look overwhelming, they are finite. We suggest that they can be processed using two complementary approaches: the collaboration of scientists, who can calculate the R-factors for each other’s studies, and the application of machine learning technology, which has brought such marvels as automatic language translation and face recognition from science fiction stories into our smartphones and has made great advances in analyzing the meaning of texts \cite{Westergaard_2017}. A field that has elicited more credibility concerns than others, with cancer research being a primary candidate, could be a place to start.
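As an illustration of the machine-learning route, the sketch below classifies sentences that cite a report as confirming, refuting, or merely mentioning its claim, and derives an R-factor from the resulting counts. The training sentences, labels, and helper function are illustrative assumptions made for this example, not an existing tool or a published model.
\begin{verbatim}
# Minimal sketch: classify citing sentences as confirming, refuting,
# or merely mentioning a claim, then compute an R-factor from the counts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples of sentences citing the claim of interest.
train_sentences = [
    "We confirmed the reported effect in an independent cohort.",
    "Consistent with the original study, treatment slowed tumor growth.",
    "We were unable to reproduce the published result.",
    "Our data contradict the earlier findings.",
    "The mechanism was first proposed in the cited report.",
    "As reviewed previously, this pathway has been studied extensively.",
]
train_labels = ["confirm", "confirm", "refute", "refute", "mention", "mention"]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression(max_iter=1000))
classifier.fit(train_sentences, train_labels)

def r_factor(citing_sentences):
    """Fraction of confirming reports among all reports that tested the claim."""
    predictions = classifier.predict(citing_sentences)
    confirmations = sum(p == "confirm" for p in predictions)
    refutations = sum(p == "refute" for p in predictions)
    tested = confirmations + refutations
    return confirmations / tested if tested else None  # None: not yet tested

# Example: sentences extracted from articles that cite one report.
print(r_factor([
    "We replicated the effect in an independent sample.",
    "Our attempts to reproduce the result were unsuccessful.",
    "This claim has been discussed in several reviews.",
]))
\end{verbatim}
In practice, such a classifier would need a large corpus of citation sentences labeled by experts, which is where the proposed collaboration among scientists and the machine-learning effort could feed each other.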
Introducing the R-factor will be disruptive: it will bruise some egos and unsettle the comfort of some scientific administrators. We feel, however, that this disruption will benefit future patients by giving a career advantage to the creative researchers and administrators who are committed to making biomedical research more productive. This change will also help restore public trust in science, which is now trending in the wrong direction.
We invite you to calculate the R-factor for the articles you like, dislike, or find puzzling. If you wish, you can also calculate your own R-factor. What is it?