We characterize our synthetic observations with statistical analysis techniques established in the literature. Table ??? enumerates our astrostatistical toolkit. We classify each statistic by its method of analysis: intensity statistics quantify emission distributions, Fourier statistics analyze N-dimensional power spectra obtained through spatial integration techniques, and morphology statistics characterize structure and emission properties. Koch et al.~(2015) provides a theoretical description of each turbulence statistic. For each data cube, we compute the intensity moment maps and apply the statistical analysis techniques. We then quantify differences in the statistical measures between synthetic observations with a set of pseudo-distance metrics, as proposed in \citet{Yeremi_2014} and further developed in Koch et al.~(2015). To perform all numerical calculations, we use TurbuStat, a Python package containing detailed algorithms for the turbulence statistics and their respective distance metrics.

Unlike the study carried out in Koch et al.~(2015), our simulation suite does not employ experimental design to set the simulation parameter values. As discussed in \citet{Yeremi_2014}, comparisons between outputs in one-factor-at-a-time approaches may give a misleading signal since the statistical effects are not fully calibrated. However, we focus our discussion on those statistics deemed by Koch et al.~(2015) to be ``good'', i.e., those which respond to changes in the underlying physical parameters rather than to statistical fluctuations in the data.
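As an illustration of this workflow, the sketch below computes a zeroth-moment (integrated intensity) map from a position--position--velocity cube and a toy power-spectrum pseudo-distance between two cubes. It is a minimal example, not the TurbuStat implementation: the cube shapes, velocity channel width, and power-law fitting range are assumptions chosen for demonstration, and the published distance metrics are defined more carefully than this bare absolute difference of slopes.

\begin{verbatim}
import numpy as np

def moment0(cube, dv=1.0):
    """Zeroth moment: integrate intensity along the velocity axis (axis 0)."""
    return np.sum(cube, axis=0) * dv

def power_spectrum_slope(image):
    """Fit a power law to the azimuthally averaged 2D power spectrum."""
    ny, nx = image.shape
    ps2d = np.abs(np.fft.fftshift(np.fft.fft2(image)))**2
    # Radial (azimuthal) average of the 2D power spectrum.
    y, x = np.indices(ps2d.shape)
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)
    radial = np.bincount(r.ravel(), ps2d.ravel()) / np.bincount(r.ravel())
    k = np.arange(1, min(nx, ny) // 2)   # skip k = 0 and the map corners
    slope, _ = np.polyfit(np.log10(k), np.log10(radial[k]), 1)
    return slope

def pspec_pseudo_distance(cube_a, cube_b):
    """Toy pseudo-distance: absolute difference of power-spectrum slopes."""
    return abs(power_spectrum_slope(moment0(cube_a)) -
               power_spectrum_slope(moment0(cube_b)))

# Compare two illustrative cubes with axes ordered (velocity, y, x).
rng = np.random.default_rng(0)
cube_a = rng.lognormal(size=(32, 128, 128))
cube_b = rng.lognormal(sigma=1.5, size=(32, 128, 128))
print(pspec_pseudo_distance(cube_a, cube_b))
\end{verbatim}

In practice we rely on the corresponding TurbuStat routines rather than a hand-rolled version such as this, so that the statistics and distance metrics match those analyzed in Koch et al.~(2015).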