Aim: We wanted to understand how well journal teams, comprising editors, managing editors, reviewers and publishers, perform across five Essential Areas of peer review, based on a self-assessment of their own editorial and peer review processes. We also wanted to identify and share best practices that journals use, and to recognise potential obstacles that could be overcome. Methods: Journals used a Self-Assessment tool to evaluate their peer review processes, answering questions and giving themselves a quantitative score, together with a qualitative explanation for the rating, across the five ‘Essential Areas’ of Integrity, Ethics, Fairness, Usefulness and Timeliness. Wiley colleagues independently rated the journals to identify best practices and potential obstacles. Results: We examined the responses of the 132 journals that completed the Self-Assessment exercise. Journals tended to rate themselves more highly than the study authors did. The greatest divergence between the journals’ self-rating (SA-score) and the study authors’ rating (R-score) was in the Essential Area of Usefulness; the smallest was in Ethics. We identified a set of best practices that could help improve peer review in each of the Essential Areas. Conclusion: The Self-Assessment encourages journals to reflect on and change their peer review processes and offers practical guidance on how to do so. Journals benefit from greater awareness of the technical solutions that exist to help them. The Self-Assessment also highlights how journals can be inconsistent in the way their processes operate, with one policy in place for authors and a different policy, or none, for reviewers and editors. Rather than being content with the status quo, journals should strive to improve their processes in the light of changing community expectations and technological advances.
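As a minimal sketch of the kind of comparison the abstract describes, the following Python snippet computes the mean gap between self-assessment scores (SA-score) and independent reviewer scores (R-score) per Essential Area, and reports where the two ratings diverge most and least. The scores below are invented placeholders, not the study's data, and the study does not publish its analysis code; this is only an assumed illustration of the scoring comparison.

```python
# Illustrative sketch only: all scores below are hypothetical placeholders,
# not data from the study. It shows one way to compare journal self-ratings
# (SA-score) against independent ratings (R-score) per Essential Area.
from statistics import mean

ESSENTIAL_AREAS = ["Integrity", "Ethics", "Fairness", "Usefulness", "Timeliness"]

# Hypothetical paired scores per area: (SA-score, R-score) for each journal.
scores = {
    "Integrity":  [(4, 3), (5, 4), (4, 4)],
    "Ethics":     [(4, 4), (3, 3), (5, 4)],
    "Fairness":   [(5, 3), (4, 3), (4, 4)],
    "Usefulness": [(5, 2), (4, 2), (5, 3)],
    "Timeliness": [(3, 3), (4, 2), (5, 4)],
}

# Mean (SA-score minus R-score) per area: positive values mean journals
# rated themselves more highly than the independent raters did.
divergence = {
    area: mean(sa - r for sa, r in pairs)
    for area, pairs in scores.items()
}

for area in ESSENTIAL_AREAS:
    print(f"{area:11s} mean SA-R gap: {divergence[area]:+.2f}")

print("Largest gap: ", max(divergence, key=divergence.get))
print("Smallest gap:", min(divergence, key=divergence.get))
```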
Background: Previous research has found that researchers rank journal reputation and Impact Factor amongst the key criteria when choosing where to submit. We explored the actual effect of several possible factors on submission numbers. Methods: We retrieved ten years of submission data from over a thousand journals, along with data on Impact Factor, retractions, and other factors. We performed statistical analyses to identify correlations, and undertook case-study research on the fifty-five most significant decreases in submissions. Results: We found a statistically significant correlation between changes in Impact Factor and changes in submission numbers in subsequent years. We also found a statistically significant effect on submission numbers in the year following the publication of a retraction. Our case studies identified other factors, including negative feedback on the peer review process. Discussion: Our findings confirm previous indications of the influence of Impact Factor on submissions. We explain the correlation with retractions through the concept of “peer review reputation”. These results indicate that editors and publishers need to attend to a journal’s peer review practices, as well as its Impact Factor, if they are to maintain and grow submissions.
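The lagged correlation the abstract reports can be illustrated with a short sketch: pair each journal's year-over-year Impact Factor change with its submission change one year later, then test the association. The arrays below are invented placeholders and the Pearson test is an assumed choice; the study does not specify its exact statistical method or publish code.

```python
# Illustrative sketch only: hypothetical data and an assumed Pearson test,
# not the study's actual analysis. It demonstrates pairing each year's
# Impact Factor change with the submission change in the following year.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-journal annual series (rows: journals, columns: years).
impact_factor = np.array([
    [2.1, 2.4, 2.3, 2.8, 3.0],
    [1.0, 0.9, 1.1, 1.0, 1.2],
    [4.5, 4.2, 4.0, 4.4, 4.6],
])
submissions = np.array([
    [300, 320, 310, 360, 400],
    [120, 110, 130, 125, 140],
    [800, 760, 700, 740, 780],
])

# Year-over-year changes; offset the slices so each Impact Factor change
# in year t is paired with the submission change observed in year t+1.
d_if  = np.diff(impact_factor, axis=1)[:, :-1].ravel()  # changes in year t
d_sub = np.diff(submissions, axis=1)[:, 1:].ravel()     # changes in year t+1

r, p = pearsonr(d_if, d_sub)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```

The same pairing scheme extends to the retraction result: replace the Impact Factor change with an indicator for whether a retraction was published in year t, and compare submission changes in year t+1 between journals with and without one.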