Matteo Cantiello


The peer review process is a pillar of modern research, verifying and validating the ever-increasing output of academia. While the academic community agrees that some process of review is necessary to ensure the quality of published research, not everybody agrees on the best approach. In particular, doubts have been cast on the current peer review process: most journals select and assign one anonymous referee (a few journals assign two or more) who is in charge of reviewing the manuscript and recommending it for publication or rejection. The argument is that the current peer review system is becoming inadequate. Here’s an incomplete list of issues:

- Research is increasingly collaborative, complex, and specialized, so it is less likely that one or a few referees can have the necessary expertise (and time) to properly handle many modern articles. Simply put, **the average number of authors per paper has been steadily increasing over the last few decades, while the number of referees per paper has not**.
- “Publication pressure” means there is a growing number of papers to referee. This need cannot easily be met, since scholars, who need to constantly publish and engage in the “funding race”, **have less time to dedicate to community service** (in a “single referee” system the review process is very time consuming).
- Given the anonymous nature of peer reviewing manuscripts, **researchers who volunteer their valuable time and knowledge don’t get recognition** for contributing.
- Cases of peer-review scams, mostly from predatory open access publishers, have grown in number over recent years. A number of journals, exploiting the publication-pressure climate, accept and publish articles with **little or no peer review**.
- Similarly, there are reports of fraud in which authors review their own or close friends’ manuscripts in order to give favorable reviews.
Simple, mundane tasks can take huge tolls on our productivity. A growing body of research is quantitatively demonstrating the existence of willpower depletion, so don’t be a statistic. Authorea helps cut out the friction associated with academic writing and editing, so you can get back to doing what you love instead of just writing about it.

**Here’s the situation:** You’ve spent months (or even years) running and reproducing experiments, keeping meticulous notes, collecting and annotating data, writing code, writing grant applications, presenting your progress and, occasionally, sleeping. All for that slam-dunk publication. Which, finally, you are writing up.

**But there are some problems.** From stylistic preferences and building consensus with your colleagues, to back-up paranoia, to re-re-re-formatting your article for the ~~reach~~, ~~next best~~, ~~achievable~~ journal of last resort, Authorea has you covered.

**An Authorea-tative difference.** As the recent 200-plus-author CERN paper demonstrates, Authorea can really kill it when it comes to collaborating, on any scale. This is particularly powerful, given the well-feared notion that communication complexity increases as the _square_ of the number of people on a project. Regardless of your level of distribution (worldwide or just down the hall), with Authorea, everyone you care to include can view, edit, comment, commit, and even upload and review data, code, and figures.

Let’s consider that last point for a minute. No longer will you have to fumble for flash drives, attach and contextualize via email, or compile, edit, and view in separate programs. All the data and code associated with your figures is online in the upper left-hand folder for your collaborators to play with. Further, if your document is public, members of the wider Authorea community can comment, verify, and even fork it, increasing your FF and contribution to science. Pretty sweet.
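The quadratic growth mentioned above is easy to verify: among n collaborators there are n(n − 1)/2 distinct pairs, so the number of possible communication channels grows roughly as the square of the team size. A minimal sketch (the function name is ours, just for illustration):

```python
def communication_channels(n: int) -> int:
    """Number of distinct pairwise channels among n collaborators: n * (n - 1) / 2."""
    return n * (n - 1) // 2

# Doubling the team roughly quadruples the channels:
for team_size in (2, 10, 20, 200):
    print(team_size, communication_channels(team_size))
```

A 10-person collaboration already has 45 possible pairwise conversations; a 200-author paper has 19,900, which is why tooling that centralizes edits and comments matters at scale.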
Authorea, given the oft-made comparisons to GitHub and Google Docs, also helps with versioning updates and the distributed editing of your manuscript. Let’s say your PI isn’t thrilled with your phrasing or explanation in the Discussion. With Authorea, you can: lock the section while you edit it (i.e., no one is looking over your shoulder, judging); commit the update for all to see (oh, how they will marvel); get real-time feedback through additional comments and edits; and, when your PI has a change of heart, easily revert to the section’s previous version (by clicking on that handy “History” clock icon).

So, Authorea provides a platform for collaborative writing and review of your manuscript, an easy and automated citation mechanism, a one-stop repository for all your figures’ data, code, and editing, and even lets you get pre-publication feedback from your peers. What’s more, Authorea will also format your manuscript for the journal of your choice (text, figures, bibliography, and all) at the click of a button.

**Two questions:**

1. Why _wouldn’t_ you use Authorea for your next collaborative publication?
2. _What would you do with the time saved_ when you’d otherwise be emailing around drafts and data, sharing and modifying code, clarifying, citing, and formatting?

Let us know in the comments!
In a breakthrough victory for open access, the EU’s European Medicines Agency (EMA) approved a system last week that provides researchers and the public with the vast majority of data from clinical trials. While generally resistant to such developments, some pharmaceutical companies are already opening up their data to scrutiny. In the US, the NIH’s clinicaltrials.gov hosts a similar database for voluntary submission of public and private clinical trial results. By January 1, 2015, however, all companies in the EU will be **required by law** to submit trial data for newly approved drugs. The FDA is considering adopting such a policy, with safety and efficacy data-mining projects in mind. As the analysis from ScienceInsider notes:

> Published journal articles often contain the main outcomes, . . . but lack detailed data and information about study design, efficacy, and safety analysis, which might shed a different light on the results when analyzed by others; moreover, some trials aren’t published at all. The AllTrials campaign has argued that the details of every trial should be publicly available for anyone to study.

Traditional publication formats disconnected from modern needs? A move toward data-rich scientific content? Opening up the process of verification and analysis to a wider audience? It’s as if science was always meant to be open, or something.

Naturally, there are caveats to the ruling. Only identified researchers can download searchable trial results and data, while registered public users can only view results on-screen. Further, certain types of commercially relevant data may be redacted by companies, with the EMA providing an 18-month window before completed trial results are finalized and posted. Still, this represents a huge step forward for widespread access to, and synthesis of, information that could be critical for improving patient outcomes.
Long sought by many researchers, the beneficial network effects of open trial data have been lauded in the literature, with comparisons made to successes in the open-source community. In one example, data sharing led to rapid analysis and determination of treatment for a deadly _E. coli_ outbreak in 2011. By broadly applying standard protocols to ease the access and use of clinical trial information, researchers contend, we will see huge health care improvements: learning which treatments are best in which circumstances, determining contraindications faster, and increasing adoption and innovation rates in treatments. Here’s hoping EMA’s actions are successful, the FDA approves similar measures, and science in aggregate opens up to take advantage of these synergistic network effects.

EMA Q&A release