Anisha Keshavan edited This_important_point_was_also__.tex  over 8 years ago

Commit id: 28404509d91c48cf60d247966c77ebcf69daec37


This important point was also made by reviewer 2, and I was definitely not clear about the overall goal of this project! To restate my response: we are not claiming that the method of having 12 phantom subjects travel to each site for scanning is in any way cost effective! The goal was to measure MRI-related biases when systems are not standardized, and then to see how we can overcome these biases with proper sample sizes, rather than with our costly calibration method or with ADNI harmonization. If sites don't need to harmonize, they can include retrospective data in the analysis (which is cost effective). It also allows sites the freedom to upgrade hardware/software or even change sequences during a study, which might be an incentive for sites to contribute data even if they are given little financial incentive, because it requires very little effort on their part!

I focused too much on the phantom calibration aspect, when I should have emphasized our statistical model that accounts for MRI-related biases, the measurements of that bias (which were estimated and validated via calibration), and the idea that this is an alternative method to ADNI harmonization rather than a strict improvement. The phantom calibration is still important to show that when we apply our assumption of scaled bias to our measurements, the overall absolute agreement between sites improves to the same level as ADNI harmonization. I followed up on this in my response to the second reviewer and to the following major concern, which compares our results to other harmonization efforts.

It is also true that some of our sites have used the vendor-provided T1 sequence, and \cite{jovicich2013brain} found that high multicenter reliability can be achieved using these standard vendor sequences with very few calibration efforts.
However, many of the sites in our consortium are in the middle of longitudinal studies and are hesitant to make even very small protocol changes. In addition, the result from \cite{jovicich2013brain} was for the longitudinal processing stream, which they found to be much more reliable overall compared to the cross-sectional stream. Since some multicenter studies rely only on a cross-sectional stream, we think it is still important to evaluate between-site variability, even between standard vendor sequences, in order to optimize sample sizes.