In September 2018, Wiley began a collaborative pilot initiative with Publons and ScholarOne (part of Clarivate, Web of Science) to open up the peer review process by offering transparent peer review on an opt-out basis to authors on submission to a journal. If an article is published, the peer reviewers’ reports, authors’ responses, and editors’ decisions accompany the published article. Reviewers also have the option to disclose their names alongside their reports, but this is not mandatory. We wanted to learn how the initiative was working and to understand the effect of introducing transparent peer review on journal turnaround times and on reviewers’ willingness to agree to review. We present data from 27 journals across a range of subject disciplines that participated in the pilot for at least six months. We compared our findings with those from 29 comparable 'control' journals that did not introduce transparent peer review over the same period. Across a total of 74,160 submissions, we measured changes both before and after the introduction of transparent peer review. We found that, on average, 86% of authors remained opted in to a transparent peer review process. The majority of reviewers were willing to publish the content of their reports; however, only 15% of reviewers agreed to sign them. Transparent peer review did not have an impact on journal turnaround times or on the number of revisions authors made. However, editors had to invite more peer reviewers in order to secure enough who agreed to review an article, increasing editorial effort. Overall, these results suggest that transparent peer review is feasible across journals in different subject disciplines and is not detrimental to editorial decision times.
We believe that the benefits of introducing transparent peer review, in terms of trust in and accountability for the peer review process and recognition of the work of editors and reviewers, outweigh the practical concerns raised against it.
Aim: We wanted to understand how well journal teams, comprising editors, managing editors, reviewers and publishers, perform across five Essential Areas of peer review according to a self-assessment of their own editorial and peer review processes. We also wanted to identify and share the best practices that journals use and to recognise potential obstacles that could be overcome. Methods: Journals used a Self-Assessment tool to assess their peer review processes, answering questions, giving themselves a quantitative score, and providing a qualitative explanation for their rating across the five ‘Essential Areas’ of Integrity, Ethics, Fairness, Usefulness and Timeliness. Wiley colleagues independently rated the journals to distinguish best practices and identify potential obstacles. Results: We examined the responses of 132 journals that completed the Self-Assessment exercise. Journals tended to rate themselves more highly than the study authors did. The greatest variation between journal self-rating (SA-score) and the study authors’ rating (R-score) was in the Essential Area of Usefulness, and the smallest variation was in the area of Ethics. We identified a set of best practices that could help improve peer review in each of the Essential Areas. Conclusion: The Self-Assessment encourages journals to reflect on and change their peer review processes and offers practical guidance on how to do this. Journals benefit from greater awareness of the technical solutions that exist to help them. The Self-Assessment also highlights how journals can be inconsistent in the way their processes operate, with one policy in place for authors and a different policy, or none, in place for reviewers and editors. Rather than be content with the status quo, journals should strive to improve their processes in the light of changing community expectations and technological advances.
It is easy to argue that open data is critical to enabling faster and more effective research discovery. In this article we describe the approach we have taken at Wiley to support open data and FAIR (Findable, Accessible, Interoperable and Reusable) data through the implementation of four data policies: “Encourages”, “Expects”, “Mandates” and “Mandates and Peer Reviews Data.” We describe the rationale for these policies and their levels of adoption so far. In the coming months we plan to measure and monitor the implementation of these policies via the publication of data availability statements and data citations. With this information, we will be able to celebrate the adoption of data-sharing practices by the research communities we work with and serve, and we hope to showcase researchers from those communities who are leading in open research.