Alberto Pepe on Quality Control (AMA)

The Authorea Team

On September 15, 2016, Authorea CEO and cofounder Alberto Pepe encouraged the Reddit science community to ask him anything. Below are a few excerpts from the AMA on the topic of quality control in scientific publishing.
Other excerpts cover general AMA topics, as well as specific discussions of metrics/ranking/impact and Authorea's business model.
Hi Alberto,
I'm interested in understanding where the quality control is going to come from. The current quality control system is broken--an overworked editor and 2-3 reviewers who all have shit to do other than volunteer their time to nitpick a paper, so they skim it and write the first thing that comes to mind--but I'd like to know if you have a better system for quality assurance of research in mind.

There are thousands of manuscripts published each day. And that doesn't account for the millions of null-result papers sitting in desk drawers, or the millions of papers with less-than-rigorous research methods. How does a system like this assure readers of the quality of the work they are accessing?

Co-Founder | Authorea

Hey captainpotty- Authorea currently has a very "light" review system, which means that you can simply comment on specific sections of a paper. We built this commenting/annotation system mostly for authors, so that they could discuss with their coauthors (privately or publicly). One day, however, we noticed that an Authorea user who had just finished the final draft of a climate science paper, before sending it to a journal, posted the link to it on Twitter, saying: "Now open for comments". He and his coauthors collected 60+ public comments (plus many other private ones) from the public (signed-in authors as well as anonymous visitors). That was an example of open pre-publication peer review: you let anyone review your work before it is even submitted to a journal. The authors told us that the final manuscript they ended up submitting was a lot stronger because it had already been reviewed! (and by more than 2-3 reviewers)

I understand that a major concern for this sort of system would be assuring the highest quality of the review mechanism. It does sound scary to open up the scholarly reviewing mechanism, which is by and large the filter that separates science from non-science. The truth is: the peer review mechanism is far from perfect. It has many, many problems today. What we propose (not today, but in the near future) is a "karma" system (similar to Reddit's) whereby your contributions and reputation as an open peer reviewer can be voted upon. I understand that such a system could be gamed. It is not without problems, but if it existed in addition to the current (problematic) closed peer review system, I think the result would overall be a more rigorous and better reviewing system.

Professor | Human Genetics | Computational Trait Analysis 
The nearly non-existent peer review process that many open science journals perform doesn't seem to do much to keep out trash science.
How do you think these platforms can evolve to limit the exposure and credibility given to pseudoscience? These papers can be particularly damaging when "research" driven by a political agenda is published (along the lines of the racist pseudoscience "journal" Open Behavioral Genetics). It can be very difficult for lay audiences to differentiate quality research from intellectually dishonest, agenda-driven pseudoscience in the absence of careful expert review. How does your platform work to manage this risk?

Co-Founder | Authorea

Hi p1percub. This is a great question, and a valid concern. It is also a hard question to answer, and while I don't have the full solution, some aspects of how we can build a better Peer Review 2.0 already seem clear to me.

  1. Reviews need to have increased transparency. Not necessarily by moving away from "blind" reviews, but at the very least the review's text needs to be made part of the accepted proceedings. In other words, peers should be rewarded for their reviews. In a system that works properly, peer reviews are just as important as the reviewed papers.

  2. Post-proceeding reviews may allow inaccurate initial reviews to be retroactively corrected, upgrading or downgrading a paper. Since we love to say that hindsight is 20/20, the publishing process should allow authors and reviewers to explicitly annotate papers with newly acquired knowledge years after they are accepted. We think that this is not unscientific; in fact, it is the most scientific thing there is. We recently wrote a post listing 11 courageous retractions in science. Overall, a politically motivated paper may slip past the proceedings for a brief time, but once under public scrutiny it should be downgraded or even retracted.

  3. A key idea that may add a lot of value is "karma". The exact details are quite tricky (and extremely interesting), and I briefly (very briefly) discussed them in another answer to captainpotty. The gist is that papers whose reviews do not agree, with sufficient weight, on the merit of the results should not be treated as equals to strong, well-reviewed scientific results.

  4. Healthy incentives for reviewers are another aspect of both increasing review quality and boosting transparency. If reviewers can receive academic merit for well-reasoned and insightful reviews, they will devote extra effort and welcome extra exposure of their reviews. Our friends at Publons are working on exactly this. It's a great start.

  5. Move away from quota-based publishing. With the exception of the rare cases where the work is clearly unscientific or the methods are completely unreliable, the review process should always aim to improve the submitted work into a fully acceptable manuscript. There are already many venues without deadlines, where submissions can arrive at any point, which is great. Additionally, we should stop treating submissions that are off-topic for a venue as "rejects" and instead think of them as "redirects", automating the resubmission flow into a sibling venue on a similar topic.
In short, we need to build on the established classic peer-review process, combine it with the best practices of large-scale question-and-answer services with karma, such as Quora, StackOverflow, and Reddit, and adapt it to a digital reality that isn't constrained by the price and scarcity of ink and paper.
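To make the karma idea in item 3 concrete, here is a minimal, purely illustrative sketch (not an Authorea feature, and the names `Review` and `weighted_score` are my own invention): each review carries a vote, and votes count in proportion to the reviewer's accumulated karma, so a paper only scores highly when weighty reviews agree on its merit.

```python
# Illustrative sketch of karma-weighted review consensus,
# similar in spirit to reputation systems on Reddit or StackOverflow.
from dataclasses import dataclass


@dataclass
class Review:
    reviewer_karma: float  # reputation earned from past reviews
    vote: int              # +1 endorses the result, -1 disputes it


def weighted_score(reviews: list[Review]) -> float:
    """Return a consensus score in [-1, 1].

    High-karma reviewers move the score more than newcomers, so a
    paper is only treated as "strong" when weighty reviews agree.
    """
    total_karma = sum(r.reviewer_karma for r in reviews)
    if total_karma == 0:
        return 0.0  # no credible reviews yet: treat as unvetted
    return sum(r.vote * r.reviewer_karma for r in reviews) / total_karma


# Example: two established reviewers endorse, one newcomer disputes.
reviews = [Review(50, +1), Review(30, +1), Review(5, -1)]
score = weighted_score(reviews)  # (50 + 30 - 5) / 85, about 0.88
```

A real system would of course need defenses against vote rings and karma farming; the sketch only shows why an unvetted paper (zero total karma) and a contested paper (mixed heavy votes) both score lower than one with broad, weighty agreement.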

Note: At Authorea, we currently offer neither peer review nor submission capabilities, and we think of ourselves as first a writing platform and second a pre-print server for science. But we have thought long and hard about peer review, and may take steps in that direction in the future. Already today, we offer one of the most advanced automations for resubmission via our Export feature, which allows restyling a manuscript to a journal's requirements in a matter of seconds.
Thanks for your question.

Hi Alberto,
I think what you're doing is great; I would love to see a shift in how academic literature is shared and I think we are long overdue for the modernization that has already been readily accepted in other fields.

Unfortunately, I find that these academic journals currently all appear to suffer from low impact factors. What would you say to a budding scientist--whose career depends upon publishing in "high quality" journals--that would encourage them to publish in your journal?

Co-Founder | Authorea

Hi Jenna- I understand your concern: no matter how open and innovative you may want to be, at the end of the day, as a scientist, you are required to publish in "high quality" journals, because that is the only way to get tenure, grants, and recognition in your community. I think that in the next few years we will see new metrics and methods to assess a scientist's contribution (metrics that go way beyond a journal's impact factor and/or number of citations). But while we wait for that to happen, what can you do TODAY? Two things come to mind:

(1) deposit a pre-print or post-print of your article in an institutional, disciplinary (or any other) repository that is indexed by scholarly search engines. A pre-print is the version immediately prior to what is published in a journal (and post-print immediately after). Even if you lose your copyright on the published version, the pre-print and post-print versions are YOURS. By depositing an open version of your work, you are giving the entire world open access to your work (and you and your work also become more visible!). If you write an article on Authorea, all you have to do is make it public and it will be a pre-print!

(2) if you have datasets and code associated with your work, publish them with the paper. Publishing the data and code behind your papers/images makes your work more likely to be reproduced (and again, it makes you and your work more visible!). Most journals today do not allow you to deposit data and code with your paper. Putting a link to a dataset in your paper is not enough, because those links die with time. There are tools like Figshare, Zenodo, and Dataverse that allow you to deposit data/code and get a DOI for it. In Authorea, you can go even a step further and include the data and code inside your paper. My dream is that the paper of the future will make data and code first-class citizens.
