OBJECTIVE CRITERIA (QUALITY)
Quality: Experiments (1–3 scale) SCORE = 1
Figure by figure, do experiments, as performed, have the proper
controls?
Yes, experiments as performed have the proper controls, consistent
with other research in the field.
Are specific analyses performed using methods that are consistent with
answering the specific question?
- Yes, the methods are appropriate to address the research question.
- The authors should provide some rationale to support the choice of
cell lines used in this study. Would other cell lines have been
appropriate as well (see the cell lines used here:
https://doi.org/10.1371/journal.pone.0007870)?
Is there the appropriate technical expertise in the collection and
analysis of data presented?
- Additional clear rationale for some experiments would strengthen the
study. What is the rationale for the choice of protease and for the
use of protease treatment prior to infection? What is the
rationale for the different cell lines used?
- It is unclear why certain modifications in the RBD would result in
reduced spike protein accumulation and impair pseudotype
incorporation. The reader would benefit from additional information in
the Discussion that addresses this issue.
Do analyses use the best-possible (most unambiguous) available methods
quantified via appropriate statistical comparisons?
Statistical analysis was not performed on the luciferase experiments.
Are controls or experimental foundations consistent with established
findings in the field? A review that raises concerns regarding
inconsistency with widely reproduced observations should list at least
two examples in the literature of such results. Addressing this question
may occasionally require a supplemental figure that, for example,
re-graphs multi-axis data from the primary figure using established axes
or gating strategies to demonstrate how results in this paper line up
with established understandings. It should not be necessary to defend
exactly why these may be different from established truths, although
doing so may increase the impact of the study and discussion of
discrepancies is an important aspect of scholarship.
- Yes, it is well known that SARS-CoV uses the ACE2 receptor, and those
results were recapitulated here, as was the finding that MERS-CoV uses
the DPP4 receptor.
- There are now also several bioRxiv preprints showing, through
different methods, that SARS-CoV-2 uses ACE2 as its entry receptor.
Quality: Completeness (1–3 scale) SCORE = 1.5
Does the collection of experiments and associated analysis of data
support the proposed title- and abstract-level conclusions? Typically,
the major (title- or abstract-level) conclusions are expected to be
supported by at least two experimental systems.
- The authors’ conclusions rely heavily on the luciferase reporter
system. Could these findings be validated using an alternative method?
- Much of the Results and Discussion focus on protease treatment and
proteolytic cleavage of the chimeric spike constructs, but the authors
do not show what trypsin treatment does to their chimeric constructs
besides enhancing entry into cells in a receptor-dependent manner.
Perhaps they could have used protease inhibitors to provide further
support for their claim that proteases can be a barrier to viral entry
(cathepsin L, for example, cleaves the SARS spike and cathepsin L
inhibitors are readily available). Alternatively, they could have used
a western blotting approach to demonstrate proteolytic processing of
Spike (trypsin treatment vs. untreated; the chimeric RBD-Spike
constructs are already FLAG-tagged to facilitate western blotting).
Are there experiments or analyses that have not been performed but if
“true” would disprove the conclusion (sometimes considered a fatal
flaw in the study)? In some cases, a reviewer may propose an alternative
conclusion and abstract that is clearly defensible with the experiments
as presented, and one solution to “completeness” here should always be
to temper an abstract or remove a conclusion and to discuss this
alternative in the discussion section.
We don’t see a fatal flaw in the study. Although there are a variety
of techniques to investigate viral entry besides luciferase assays and
pseudotyped particles, we think that it is unlikely that they would
provide conflicting data.
Quality: Reproducibility (1–3 scale) SCORE = 2
Figure by figure, were experiments repeated per a standard of 3 repeats
or 5 mice per cohort, etc.?
- Most figures contain panels with multiple replicates (n=3), although
they appear to be technical replicates rather than biological
replicates. Having biological replicates would strengthen the data.
- Statistics could be performed on experiments with multiple biological
replicates; pursuing statistical analysis would strengthen the claims
in all figures (a minimal sketch of one such comparison follows below).
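To illustrate the kind of comparison we have in mind, here is a minimal
sketch in Python, assuming hypothetical triplicate relative light unit
(RLU) values for a spike-bearing pseudotype versus a no-spike (bald)
control; all values and group labels are invented for illustration:

    # Hypothetical comparison of luciferase entry signal between a
    # spike-bearing pseudotype and a no-spike (bald) control.
    # All RLU values are invented for illustration only.
    from scipy import stats

    spike_rlu = [152_000, 148_500, 161_200]  # biological replicates, RLU
    bald_rlu = [1_100, 950, 1_300]           # no-spike control, RLU

    # Welch's t-test (no equal-variance assumption); with more than two
    # groups, one-way ANOVA plus a multiple-comparison correction would
    # be more appropriate.
    t_stat, p_value = stats.ttest_ind(spike_rlu, bald_rlu, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

Because luciferase readouts often span orders of magnitude,
log-transforming the RLU values before testing may also be appropriate.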
Is there sufficient raw data presented to assess rigor of the analysis?
Yes. Raw luciferase assay data are not typically presented in the
field.
Are methods for experimentation and analysis adequately outlined to
permit reproducibility?
- We struggled to fully understand the methods for infection of target
cells with pseudotyped VSV. In particular, the use of protease before
infection was unclear, and we were unsure how that could reasonably be
expected to increase infectivity (see this paper, which describes what
we understand to be the current consensus on proteolytic cleavage in
coronaviruses: https://www.pnas.org/content/114/42/11157).
Some literature even suggests that trypsin pretreatment would decrease
infectivity (e.g., PMID: 19924243). Overall, we suggest that the
“Luciferase-based cell entry assay” method be clarified to include
enough information for it to be reasonably reproduced, and that the
rationale for protease treatment before infection and for using
trypsin rather than another cellular protease be explained. If these
methods are well established in the literature, that literature should
be cited here.
- The authors should describe how pseudoparticle titration was performed
and report the MOI used for each experiment (a brief worked MOI
calculation is sketched below).
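For context, MOI is simply the number of infectious units added per
target cell. A short sketch of the calculation we would like to see
reported, with all numbers invented for illustration:

    # Hypothetical MOI calculation; the titer, inoculum volume, and
    # cell count below are invented for illustration only.
    titer_iu_per_ml = 1e6  # infectious units/mL from pseudoparticle titration
    inoculum_ml = 0.1      # inoculum volume added per well, mL
    cells_per_well = 2e4   # target cells seeded per well

    moi = (titer_iu_per_ml * inoculum_ml) / cells_per_well
    print(f"MOI = {moi:.2f} infectious units per cell")  # MOI = 5.00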
Quality: Scholarship (1–4 scale but generally not the basis for
acceptance or rejection) SCORE = 1
Has the author cited and discussed the merits of the relevant data that
would argue against their conclusion?
Yes.
Has the author cited and/or discussed the important works that are
consistent with their conclusion and that a reader should be especially
familiar when considering the work?
Yes, with the exception of the protease literature.
Specific (helpful) comments on grammar, diction, paper structure, or
data presentation (e.g., change a graph style or color scheme) go in
this section, but scores in this area should not be significant bases
for decisions.
- Generally, the use of colour in the figures was quite helpful
throughout the manuscript. However, in Figure 3A, the colour coding of
the depicted spike proteins is not defined. For clarity, we suggest
that the authors identify what the orange and blue boxes represent.
- In Figure 5, the graphs (panels C and D) are in a different
orientation from all the previous iterations of similar data in
earlier figures. We suggest that the authors redraw these graphs in a
vertical orientation for consistency. In addition, the titles of
panels C and D should follow a consistent format.
- Figure S4 could be moved into the main text to better support the
following claim: “However, the 2019-nCoV RBD contains most of
the contact points with human ACE2 that are found in clade 1 as well
as some amino acid variations that are unique to clade 2 and 3 (figure
s4b). Taken together with our receptor assay results, it may be
possible that 2019-nCoV arose from recombination between clade 1 and
the other clades.” This is a strong (and very interesting) claim, for
which the authors provide some evidence and which also appears in
other recent preprints on this topic, but they do not mention it again
until the Discussion. It should be discussed in more detail in the
Results.
- Figure S1C shows something similar: 2019-nCoV/SARS-CoV-2 clusters
separately. Figure S4B then shows that this could be the result of
recombination. Although the paper’s focus is not 2019-nCoV in
particular, given the ongoing SARS-CoV-2 outbreak we think it is
important to include this in the main Results section.
MORE SUBJECTIVE CRITERIA (IMPACT)
Impact: Novelty/Fundamental and Broad Interest (1–4 scale) SCORE = 1
A score here should be accompanied by a statement delineating the most
interesting and/or important conceptual finding(s), as they stand right
now with the current scope of the paper. A “1” would be expected to be
understood for the importance by a layperson but would also be of top
interest (have lasting impact) on the field.
- The authors report a new functional viromics screen that is
inexpensive, rapid, and accurate, which stands to be the most
important contribution. It will also be of broad interest to the
research community that their screen quickly identified the receptor
used by the SARS-CoV-2 virus causing the current outbreak in Wuhan,
China.
- This screen is also valuable because it can be conducted at reduced
biosafety levels (BSL2).
How big of an advance would you consider the findings to be if fully
supported but not extended? It would be appropriate to cite literature
to provide context for evaluating the advance. However, great care must
be taken to avoid exaggerating what is known comparing these findings to
the current dogma (see Box 2). Citations (figure by figure) are
essential here.
- There is consensus that the creation of a new screen to identify
coronavirus receptor binding domain specificity is an important
advance for the field. This new method provides an efficient and
cost-effective way to investigate receptor-binding specificity of any
newly discovered betacoronavirus.
- One potential limitation of this system is that it can determine
which known receptor a coronavirus uses to enter cells but cannot be
used to identify unknown receptors. This doesn’t diminish the
importance of the system, but the limitation could be explored briefly
in the Discussion.
Impact: Extensibility (1–4 or N/A scale) SCORE = N/A
Has an initial result (e.g., of a paradigm in a cell line) been extended
to be shown (or implicated) to be important in a bigger scheme (e.g., in
animals or in a human cohort)?
This criterion is only valuable as a scoring parameter if it is present,
indicated by the N/A option if it simply doesn’t apply. The extent to
which this is necessary for a result to be considered of value is
important. It should be explicitly discussed by a reviewer why it would
be required. What work (scope and expected time) and/or discussion would
improve this score, and what would this improvement add to the
conclusions of the study? Care should be taken to avoid casually
suggesting experiments of great cost (e.g., “repeat a mouse-based
experiment in humans”) and difficulty that merely confirm but do not
extend (see Bad Behaviors, Box 2).