OBJECTIVE CRITERIA (QUALITY)
Quality: Experiments (1–3 scale) SCORE = 1
Figure by figure, do experiments, as performed, have the proper controls?
Yes, experiments as performed have the proper controls, consistent with other research in the field.
Are specific analyses performed using methods that are consistent with answering the specific question?
Is there the appropriate technical expertise in the collection and analysis of data presented?
Do analyses use the best-possible (most unambiguous) available methods quantified via appropriate statistical comparisons?
No; statistical analysis was not performed on the luciferase experiments, so the significance of the reported differences between conditions cannot be assessed.
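To make this suggestion concrete, a minimal sketch of the kind of comparison that could be reported is given below. It assumes triplicate normalized relative-light-unit (RLU) values per condition; the condition names and numbers are hypothetical and are not taken from the manuscript. This is one reasonable approach (one-way ANOVA followed by Bonferroni-corrected pairwise tests against the control), not a prescription of the authors' method.

    # Hypothetical example: one-way ANOVA followed by Bonferroni-corrected
    # pairwise t-tests against the control, applied to triplicate
    # luciferase relative-light-unit (RLU) readings.
    # All values below are invented for illustration only.
    from scipy import stats

    rlu = {
        "control":  [1.00, 0.92, 1.08],   # normalized RLU, n = 3
        "mutant_A": [0.45, 0.51, 0.39],
        "mutant_B": [0.97, 1.05, 0.88],
    }

    # Overall test for any difference among conditions
    f_stat, p_anova = stats.f_oneway(*rlu.values())
    print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")

    # Pairwise comparisons to the control, Bonferroni-corrected
    comparisons = [name for name in rlu if name != "control"]
    for name in comparisons:
        t_stat, p_raw = stats.ttest_ind(rlu["control"], rlu[name])
        p_adj = min(1.0, p_raw * len(comparisons))
        print(f"control vs {name}: t = {t_stat:.2f}, adjusted p = {p_adj:.3g}")

Any standard multiple-comparison procedure (e.g., Dunnett's or Tukey's test) would serve equally well; the point is simply that the luciferase comparisons should be accompanied by a stated test and corrected p-values.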
Are controls or experimental foundations consistent with established findings in the field? A review that raises concerns regarding inconsistency with widely reproduced observations should list at least two examples in the literature of such results. Addressing this question may occasionally require a supplemental figure that, for example, re-graphs multi-axis data from the primary figure using established axes or gating strategies to demonstrate how results in this paper line up with established understandings. It should not be necessary to defend exactly why these may be different from established truths, although doing so may increase the impact of the study and discussion of discrepancies is an important aspect of scholarship.
Quality: Completeness (1–3 scale) SCORE = 1.5
Does the collection of experiments and associated analysis of data support the proposed title- and abstract-level conclusions? Typically, the major (title- or abstract-level) conclusions are expected to be supported by at least two experimental systems.
Are there experiments or analyses that have not been performed but if “true” would disprove the conclusion (sometimes considered a fatal flaw in the study)? In some cases, a reviewer may propose an alternative conclusion and abstract that is clearly defensible with the experiments as presented, and one solution to “completeness” here should always be to temper an abstract or remove a conclusion and to discuss this alternative in the discussion section.
We do not see a fatal flaw in the study. Although viral entry can be investigated with a variety of techniques other than luciferase assays on pseudotyped particles, we think it unlikely that those approaches would yield conflicting data.
Quality: Reproducibility (1–3 scale) SCORE = 2
Figure by figure, were experiments repeated per a standard of 3 repeats or 5 mice per cohort, etc.?
Is there sufficient raw data presented to assess rigor of the analysis?
Yes. Raw luciferase assay data are not typically presented in the field.
Are methods for experimentation and analysis adequately outlined to permit reproducibility?
Quality: Scholarship (1–4 scale but generally not the basis for acceptance or rejection) SCORE = 1
Has the author cited and discussed the merits of the relevant data that would argue against their conclusion?
Yes.
Has the author cited and/or discussed the important works that are consistent with their conclusion and that a reader should be especially familiar when considering the work?
Yes, with the exception of the protease literature.
Specific (helpful) comments on grammar, diction, paper structure, or data presentation (e.g., change a graph style or color scheme) go in this section, but scores in this area should not be significant bases for decisions.
MORE SUBJECTIVE CRITERIA (IMPACT)
Impact: Novelty/Fundamental and Broad Interest (1–4 scale) SCORE = 1
A score here should be accompanied by a statement delineating the most interesting and/or important conceptual finding(s), as they stand right now with the current scope of the paper. A score of "1" would indicate a finding whose importance could be understood by a layperson and that would also be of top interest to (have lasting impact on) the field.
How big of an advance would you consider the findings to be if fully supported but not extended? It would be appropriate to cite literature to provide context for evaluating the advance. However, great care must be taken to avoid exaggerating what is known when comparing these findings to the current dogma (see Box 2). Citations (figure by figure) are essential here.
Impact: Extensibility (1–4 or N/A scale) SCORE = N/A
Has an initial result (e.g., of a paradigm in a cell line) been extended to be shown (or implicated) to be important in a bigger scheme (e.g., in animals or in a human cohort)? This criterion is only valuable as a scoring parameter if it is present, indicated by the N/A option if it simply doesn't apply. The extent to which this is necessary for a result to be considered of value is important, and a reviewer should explicitly discuss why it would be required. What work (scope and expected time) and/or discussion would improve this score, and what would this improvement add to the conclusions of the study? Care should be taken to avoid casually suggesting experiments of great cost (e.g., "repeat a mouse-based experiment in humans") and difficulty that merely confirm but do not extend (see Bad Behaviors, Box 2).