\section{Statistical Analysis}\label{discuss}
% RB & SO: e.g., implications for observations
We use pseudo-distance metrics to quantify the differences between all synthetic observations. For each statistic, we produce a color-plot table of distance-metric values for every simulation pair. Section 3 identified qualitative differences corresponding to stellar feedback; expanding upon this, we now quantify the differences between all simulations and determine the sensitivity of each statistic, including the Cram\'er statistic, which has no graphical output and therefore does not appear in Section 3.
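As a minimal sketch of the procedure described above (assuming each synthetic observation has been reduced to a 1-D sample of some summary statistic; the sample construction and the use of the energy-distance form of the Cram\'er statistic are illustrative stand-ins, not the actual metrics or data used here):

```python
import numpy as np

def energy_distance(x, y):
    """Energy distance between two 1-D samples; the Cramer statistic
    mentioned in the text is closely related (illustrative form)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xy = np.abs(x[:, None] - y[None, :]).mean()
    xx = np.abs(x[:, None] - x[None, :]).mean()
    yy = np.abs(y[:, None] - y[None, :]).mean()
    return 2.0 * xy - xx - yy

# Hypothetical stand-ins for summary statistics drawn from each
# synthetic observation (one 1-D sample per simulation).
rng = np.random.default_rng(0)
sims = [rng.normal(mu, 1.0, size=500) for mu in (0.0, 0.1, 2.0)]

# Pairwise pseudo-distance matrix over all simulation pairs.
n = len(sims)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        dist[i, j] = energy_distance(sims[i], sims[j])

# dist is symmetric with a zero diagonal; rendering it with a tool
# such as matplotlib's imshow gives the color-plot table of
# distance-metric values described above.
```

The matrix is symmetric with a zero diagonal by construction, so only the upper (or lower) triangle carries information when the table is plotted.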
% TODO: add figure reference; determine sensitivities to our parameters (three categories).
% TODO: decide whether this paragraph belongs here or elsewhere.
Unlike the study carried out in Koch et al.~(2015), our simulation suite does not use an experimental design to set the simulation parameter values. As discussed in \citet{Yeremi_2014}, comparing outputs in a one-factor-at-a-time approach may give a misleading signal, since the statistical effects are not fully calibrated. We therefore focus our discussion on the statistics deemed ``good'' by Koch et al.~(2015), i.e., those that respond to changes in the underlying physical parameters rather than to statistical fluctuations in the data.
% TODO: bring the color plots closer together and redo the PDF distance; the current plots are incomplete and serve only as a rough outline of the discussion.