- Reproducibility of the research literature (apart from computational and empirical reproducibility, ...)
- At the very least studies should be reproducible; better still would be replicable (though the two concepts overlap)
- It is impossible to say exactly what percentage of published work is reproducible; estimates range from 15-60% depending on the discipline
- Many researchers cannot even reproduce their own research (poor archiving)
- Usually the direction and rough magnitude of reported effects hold up, but the claimed significance can rarely be reproduced
- Problems with p-values: 1. misinterpretation and p-hacking, 2. cultural: "when the measure becomes a target, it ceases to be a good measure" (Goodhart's law); see the sketch below
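A minimal sketch (my own illustration, not from the talks) of the multiple-testing side of p-hacking: if a study quietly runs 20 independent null comparisons at alpha = 0.05, the chance that at least one comes out "significant" is roughly 1 - 0.95^20 ≈ 64%. All numbers below are illustrative.

```python
# Illustration of why "test until something is significant" inflates false positives.
# Standard library only; the threshold |t| > 2.0 is a crude stand-in for p < 0.05.
import random

random.seed(0)

def null_comparison(n=30):
    """One comparison of two groups drawn from the SAME distribution (a true null)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    t = (mean_a - mean_b) / ((var_a / n + var_b / n) ** 0.5)
    return abs(t) > 2.0  # roughly p < 0.05 for ~58 degrees of freedom

studies = 2000
false_hits = sum(
    any(null_comparison() for _ in range(20))  # 20 comparisons per "study"
    for _ in range(studies)
)
print(f"At least one 'significant' null result in {false_hits / studies:.0%} of studies")
```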
Martin Vetterli
Laurent Gatto
- "Research is the by-product of researchers getting promoted" (David Barron)
- Senior researchers have themselves profited from the system... why would they want to change it?
- For peer review: simply send the paper back if you are not able to reproduce it
Jessica Polka
- 48h screening process: is the preprint "science" at all (e.g. ethically sound)?
- SHERPA/RoMEO: database of publisher copyright and self-archiving policies
- Preprint editors: a possible filtering step that goes beyond peer review alone
Lawrence Rajendran
- $5M for a university to get access to a single publisher! => forces a strict selection of publishers (many universities cannot afford more than a few); an article from 1858 is still behind a $38 paywall
- "Fleming could not publish his findings nowadays" => reviewers reward stories over substance; incremental research is criticised
- ScienceMatters allows publishing single observations under triple-blind peer review
- Reproducibility score instead of impact factor
- Technical quality (statistics, controls, sample size) is scored 0-10, separately from scientific impact/novelty and conceptual advance
- How long does the review process take? Answer: fastest 24h, longest 1 month
- Reviewers are paid something around $30
CC licenses:
- Closed: (c) all rights reserved (the default)
- CC BY: you can disseminate as long as you credit the author ("BY" = attribution); ND (no derivatives) and NC (non-commercial) clauses restrict reuse further
- CC BY-SA: you can reuse it, but derivative works must be made open again under the same license (share-alike)
- CC0: completely open (public domain dedication)
CC licenses are machine readable (see the sketch below)
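A minimal sketch of what "machine readable" means in practice: Creative Commons recommends marking licensed works with a rel="license" link in the page's HTML, which a script can then discover. The HTML snippet below is a made-up example page, not taken from the session.

```python
# Discover the CC license of a web page by looking for rel="license" links.
# Standard library only.
from html.parser import HTMLParser

class LicenseFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.licenses = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Creative Commons marks licensed works with rel="license" on a link.
        if tag == "a" and attrs.get("rel") == "license":
            self.licenses.append(attrs.get("href"))

page = """
<p>This dataset is licensed under
  <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a>.
</p>
"""

finder = LicenseFinder()
finder.feed(page)
print(finder.licenses)  # ['https://creativecommons.org/licenses/by/4.0/']
```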
Sünje Dallmeier-Tiessen
- There are several repositories for sharing data: re3data.org (a registry of repositories), Zenodo, and the Open Science Framework (OSF, US funded); see the Zenodo sketch after this list
- ORCID helps link your name to your data and other outputs so you get appropriate credit
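A rough sketch of depositing a data file on Zenodo through its REST API (documented at https://developers.zenodo.org). The token and file name are placeholders, and the metadata/publish steps are omitted; this is a simplified illustration, not a recipe from the talk.

```python
# Create a draft Zenodo deposition and upload one file into its bucket.
import os
import requests

API = "https://zenodo.org/api/deposit/depositions"
token = os.environ["ZENODO_TOKEN"]  # personal access token with deposit scope (placeholder)

# 1. Create an empty deposition (a draft record).
resp = requests.post(API, params={"access_token": token}, json={})
resp.raise_for_status()
deposition = resp.json()

# 2. Upload the file into the deposition's file bucket.
bucket_url = deposition["links"]["bucket"]
with open("data.csv", "rb") as fh:  # data.csv is a placeholder file name
    upload = requests.put(f"{bucket_url}/data.csv", data=fh,
                          params={"access_token": token})
upload.raise_for_status()

# A real deposit still needs metadata (title, authors, license) and a publish call.
print("Draft created:", deposition["links"]["html"])
```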
Victoria Stodden
- Interestingly, fear of being scooped is a smaller barrier than the additional work that sharing requires
- Interestingly, the HPC community achieves much better reproducibility => it is a cultural issue
- GPL license for code
- Creative Commons: for all kinds of works (music, images, ...)
Gael Varoquaux
- MVC for research software!
- Code is not software: software solves a well-defined problem, whereas code in research solves problems that are usually not well defined
- Most likely the code from your PhD does not qualify as a library
- Join someone else's project instead of building your own!
Questions:
- If preprints become more popular and everyone can publish anything, do university affiliations/names become more important? Is open science only for those who can afford it? Answer: we need to accept different forms of 'open' and not judge efforts that are less extreme and holistic than what we consider the best open science. There have been efforts at positive discrimination (holding some people to a higher standard)
- Technology itself doesn't play a role, but what about technical skills? Many more people in CS seem enthusiastic compared to biology
- Does publishing on bioRxiv make the paper less interesting for journals? Answer: journals have policies (media embargoes) in place that prevent authors from contacting the media directly before the paper has been officially published
- Preprint editor bias? Answer: editors only make suggestions; there is no automatic release on PLOS
- How much visibility do the papers get compared to journals?
- Graphic explanation drops
- Number of figures in ScienceMatters compared to other journals?
- Matters select vs. Matters naming
- Putting data on GitHub? GitHub warns for files above 50 MB and rejects pushes with files above 100 MB (see the sketch below)
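A quick sketch (my own, assuming GitHub's documented limits of a warning above 50 MB and a hard rejection above 100 MB) for spotting files in a working tree that should go to a data repository or Git LFS instead of the Git history.

```python
# Scan the current directory for files that would trip GitHub's size limits.
from pathlib import Path

WARN = 50 * 1024 * 1024    # GitHub warns above 50 MB
BLOCK = 100 * 1024 * 1024  # GitHub rejects pushes with files above 100 MB

for path in Path(".").rglob("*"):
    if ".git" in path.parts or not path.is_file():
        continue
    size = path.stat().st_size
    if size > BLOCK:
        print(f"TOO LARGE  {path} ({size / 2**20:.0f} MB)")
    elif size > WARN:
        print(f"WARNING    {path} ({size / 2**20:.0f} MB)")
```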