Anisha Keshavan edited I_asked_colleagues_that_work__.tex
over 8 years ago
Commit id: b99e5ae5d2f17d73ea5d1d9709a3db06cc4957eb
...
\label{comparetocannon}
\end{table}
The only ROI that does not compare to \cite{cannon2014} is the thalamus. It is possible that the FIRST algorithm is more reliable at segmenting this structure. I included another table from \cite{jovicich2013brain}, where again sites were not strictly harmonized and different control subjects were scanned at each site, but the authors used the same cross-sectional FreeSurfer algorithm that we used. Instead of calculating between-site ICCs, they calculated the average within-site ICC for each ROI. The following table (now included in the manuscript) compares our within-site ICCs pre- and post-calibration to the average within-site ICC values of \cite{jovicich2013brain}:
\begin{table}
\begin{tabular}{ c c c c }
...
Caud & .97 & .97 & $0.909 \pm 0.092$ \\
\bottomrule
\end{tabular}
\caption{Comparison of the within-site ICC before and after leave-one-out scaling-factor calibration with the cross-sectional FreeSurfer results of \cite{jovicich2013brain}, where scanners were standardized, vendor-provided T1-weighted sequences were used, and the average within-site ICC is reported. The within-site ICCs of our study fall within the range of \cite{jovicich2013brain}, which shows that the sites in this study are as reliable as those in \cite{jovicich2013brain}.}
\end{table}
Here, we see that the within-site thalamus ICC values, along with those of the other ROIs, fall within the range reported by \cite{jovicich2013brain}.
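Since the comparison above rests on within-site ICC values, a minimal sketch of how a within-site ICC can be computed from repeated scans of the same subjects may be useful. This uses the standard one-way random-effects ICC(1,1) formulation; the exact ICC variant used in the manuscript and in \cite{jovicich2013brain} may differ, so treat this as an illustration rather than the analysis code:

```python
import numpy as np

def icc_1_1(ratings):
    """One-way random-effects ICC(1,1).

    ratings: 2D array-like, rows = subjects, columns = repeated
    scans (e.g. test-retest volumes of one ROI at one site).
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    subject_means = ratings.mean(axis=1)
    # One-way ANOVA mean squares: between-subject and within-subject
    ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((ratings - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

For perfectly reproducible measurements the estimate is 1; test-retest volumes with small within-subject scatter relative to between-subject spread yield values near the high ICCs shown in the table.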