Table 8: Questions with a high number of non-applicable responses (‘N/A’).
In some instances, we found inconsistencies between how journals framed questions to authors and to peer reviewers. For example, Q3 (on referring reviewers to reporting guidelines) was the only question to receive no R-scores of 3, whereas ten journals received an R-score of 3 for Q36 (on referring authors to reporting guidelines); four of these ten had an R-score of 1 for Q3.
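A comparison like this can be read off a cross-tabulation of the two questions' R-scores. The following is a minimal sketch, assuming a hypothetical DataFrame `r_scores` with one row per journal and one column per question (e.g. `Q3`, `Q36`) holding R-scores from 1 to 3; the paper does not describe its analysis code.

```python
import pandas as pd

# Hypothetical input file: one row per journal, one column per question.
r_scores = pd.read_csv("r_scores.csv")

# Cross-tabulate the paired questions; the cell at (Q3 == 1, Q36 == 3)
# would hold the four journals described above.
print(pd.crosstab(r_scores["Q3"], r_scores["Q36"]))
```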
In some instances, we found a discrepancy between how journals rated themselves and the practices they undertook. For example, one journal had an SA-score of 1 for Q27 (on plagiarism detection software) but has the iThenticate text-similarity tool incorporated into its submission system, perhaps indicating a lack of awareness of the technology or a misunderstanding of the question.
Of the 132 journals, 10 of the 49 operating double-blind peer review had an SA-score of 1 for Q15 (on how they address bias in peer review); in contrast, 7 of the 83 operating single-blind peer review had an SA-score of 3.
Calculating average Timeliness scores for all journals and dividing them into quartiles allowed comparisons to be made with actual journal times from submission to first and final decision. We found no correlation between average turnaround times and SA-scores (Table 9). We did, however, find a correlation between average turnaround times and R-scores for Timeliness, with the shortest turnaround times corresponding to the highest scores for Timeliness (Table 10).
Quartile       Mean time to first decision (calendar days)   Mean time to final decision (calendar days)
Q1 (lowest)    62.21                                          126.28
Q2             81.48                                          155.22
Q3             61.60                                          105.74
Q4 (highest)   65.02                                          126.90
Table 9: Mean times from submission to first and final decision, by quartile of average SA-scores for Timeliness.
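To make the quartile comparison concrete, the following is a minimal sketch in Python, assuming a hypothetical DataFrame with columns `timeliness_sa` (average SA-score for Timeliness), `timeliness_r` (average R-score), `days_to_first_decision`, and `days_to_final_decision`. The paper does not describe its analysis code, nor which correlation measure was used; Spearman's rank correlation stands in here purely as an illustration.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical input file: one row per journal with average Timeliness
# scores and turnaround times in calendar days.
journals = pd.read_csv("journal_scores.csv")

# Divide journals into quartiles by average Timeliness SA-score.
journals["sa_quartile"] = pd.qcut(
    journals["timeliness_sa"], q=4,
    labels=["Q1 (lowest)", "Q2", "Q3", "Q4 (highest)"],
)

# Mean turnaround times per quartile (cf. Table 9).
print(
    journals.groupby("sa_quartile", observed=True)[
        ["days_to_first_decision", "days_to_final_decision"]
    ].mean().round(2)
)

# Rank correlation between scores and turnaround times; a significant
# negative coefficient for the R-scores (shorter times, higher scores)
# would match the reported result for Table 10.
for score in ["timeliness_sa", "timeliness_r"]:
    rho, p = spearmanr(journals[score], journals["days_to_first_decision"])
    print(f"{score}: rho={rho:.2f}, p={p:.3f}")
```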