Anisha Keshavan edited This_important_point_was_also__.tex
about 8 years ago
Commit id: 76f91ef7f69a38d0dc278762f7751dbbe47e71cf
diff --git a/This_important_point_was_also__.tex b/This_important_point_was_also__.tex
index f02a665..32f02dc 100644
--- a/This_important_point_was_also__.tex
+++ b/This_important_point_was_also__.tex
...
This important point was also made by reviewer 2, and we have clarified the
overall goal of this project. We will restate our response here. The overall
goal of this project was not to claim that the method of scanning 12 phantom
subjects was cost-effective. Rather, the goal was to measure MRI-related
biases when systems are not standardized, and then see how one can overcome
these biases with proper sample sizes, rather than with a costly calibration
method or ADNI-like harmonization. If sites don't need to harmonize, they can
include retrospective data in the analysis (which is cost-effective). This
also allows sites the freedom to upgrade hardware/software or even change
sequences during a study. This might be an incentive for sites to contribute
data even if they are given little financial support. The phantom calibration
aspect has been minimized and our statistical model that accounts for
MRI-related biases has been emphasized. The measurements of that bias (which
were estimated and validated via calibration) are an important part of this
study because they validate the scaling assumption of the statistical model
and provide researchers with values to plug into the power equation. Our
framework provides an alternative method to ADNI harmonization, rather than a
strict improvement. The human phantom calibration showed that when one
applies our assumption of scaled bias to our measurements, the overall
absolute agreement between sites improves to the same level as ADNI-type
harmonization. Our results are compared to other harmonization efforts in the
manuscript and in the following response.
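To make the trade-off concrete, the following is a generic two-group sample-size formula of the kind the power equation above refers to; the notation ($\sigma^2_{\mathrm{site}}$, $\Delta$) is illustrative, not necessarily that of the manuscript:

```latex
% Illustrative sketch: between-site bias variance folded into the
% total variance of a standard two-group sample-size calculation.
\[
  n \;=\; \frac{2\,\bigl(z_{1-\alpha/2} + z_{1-\beta}\bigr)^{2}
          \bigl(\sigma^{2}_{\mathrm{within}} + \sigma^{2}_{\mathrm{site}}\bigr)}
          {\Delta^{2}},
\]
```

where $\Delta$ is the effect size of interest and $\sigma^{2}_{\mathrm{site}}$ is the between-site variance attributable to MRI-related bias (e.g., as estimated from the calibration measurements). In this view, a larger $\sigma^{2}_{\mathrm{site}}$ at unharmonized sites is offset by a larger $n$ rather than by costly calibration.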
Our sites have used sequences that are similar to the vendor-provided T1
sequences, and \cite{jovicich2013brain} found that high multicenter
reliability can be achieved using these standard vendor sequences with very
few calibration efforts. However, many of the sites in our consortium are in
the middle of longitudinal studies and are hesitant to make even very small
protocol changes, despite the result from \cite{jovicich2013brain}, which was
obtained with the longitudinal processing stream. Our statistical model is
for a cross-sectional design, and the evaluation of scaling bias, even
between standard vendor sequences, is important for optimizing sample sizes
in the cross-sectional case.