The current crisis of veracity in biomedical research is enabled by the lack of publicly accessible information on whether reported scientific claims are valid. One approach to this problem is to replicate previous studies through specialized reproducibility centers. However, this approach is costly, often unaffordable, and raises a number of as-yet-unresolved concerns about its effectiveness and validity. We propose an approach that yields a simple numerical measure of veracity, the R-factor, by summarizing the outcomes of already published studies that have attempted to test a claim. The R-factor of an investigator, a journal, or an institution would be the average of the R-factors of the claims they reported. We illustrate this approach using three studies recently tested by a replication initiative, compare the results, and discuss how the R-factor can help improve the veracity of scientific research.
FOOD FOR THOUGHT

Check the openscienceskills document (jcolomb repo) and the presentation ("dataandcode" in the figure folder) based on this repo.

AUDIENCE

PhD students with no coding skills? (Medical students?)

TIMING

Goal is to set the workshop up by the end of May and run it in mid-June (the 19th?).

TOPICS

1. Version control (git)
2. Documenting analysis (markdown)
3. Metadata and standards

HOW

- Run a contest (best solution to a problem) via GitHub fork and pull requests.
- Work with participants' private data, with one survey before and one survey after the workshop: GitHub to share the data (same standard), R to concatenate it and ggplot2 to plot it!
- Maybe push it to Figshare? The data is at the same time the content we work with and the presentation of the results of the workshop.
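The fork-and-pull contest step could be sketched for participants roughly as follows. This is a minimal illustration, not the actual workshop material: the repository names, file names, and participant identifiers are hypothetical placeholders, and the clone/remote setup is shown only as comments since it depends on the real workshop repo.

```shell
# One-time setup in the real workshop (placeholders, shown for reference only):
#   git clone https://github.com/<you>/workshop-data.git
#   cd workshop-data
#   git remote add upstream https://github.com/<organizer>/workshop-data.git

# For this self-contained demo, initialise a fresh local repository instead.
mkdir -p workshop-demo && cd workshop-demo
git init -q
git config user.email "student@example.org"
git config user.name "Workshop Student"

# Add a survey answer following the shared data standard and commit it.
echo "participant,answer" > survey_before.csv
echo "p01,yes" >> survey_before.csv
git add survey_before.csv
git commit -q -m "Add pre-workshop survey answers for p01"

# Show the recorded history; in the real workshop the participant would
# now `git push` and open a pull request against the organizer's repo.
git log --oneline
```

Because every participant commits data in the same standard format, the organizer's side can then concatenate all the merged CSV files (e.g. in R) and plot them with ggplot2, as noted above.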