Dan Gifford edited untitled.tex
about 10 years ago
Commit id: 8a9a810c43d0f54a38fbdf3548e8ac6e5e4e037b
\begin{equation}
\langle \sigma | M \rangle = 1093 \left(\frac{h(z) M}{10^{15} M_{\odot}}\right)^{0.34}
\end{equation}
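As a quick numerical check, the mean relation can be evaluated directly. A minimal sketch, assuming masses in $M_{\odot}$, dispersions in km/s, and $h(z)=1$ by default (the function name is ours):

```python
def mean_sigma(M, hz=1.0, A=1093.0, alpha=0.34):
    """Mean velocity dispersion <sigma|M> in km/s.

    M is the cluster mass in Msun; A and alpha are the normalization
    and slope of the relation above; hz is h(z) (h(z)=1 assumed here).
    """
    return A * (hz * M / 1e15) ** alpha

print(mean_sigma(1e15))  # a 1e15 Msun cluster at h(z)=1 gives 1093.0 km/s
```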
This relationship has a very small
lognormal scatter of $S_{\log(\sigma) | \log(M)}\sim 4\%$. So:
\begin{equation}
P(\sigma | M) = \frac{1}{\sqrt{2\pi}S_{\sigma | M}} e^{-\frac{\left(\log(\sigma) - \log(\langle \sigma | M \rangle)\right)^{2}}{2 S_{\sigma | M}^{2}}}
\end{equation}
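This density is easy to evaluate numerically. A sketch, assuming natural logarithms for the scatter (so $S_{\sigma|M}=0.04$) and treating the density as normalized in $\log\sigma$:

```python
import numpy as np

def p_sigma_given_M(log_sigma, M, S=0.04, hz=1.0):
    """P(sigma|M) as a density in log(sigma).

    Assumes natural logs and S_{log sigma|log M} = 0.04; the mean is
    the <sigma|M> power law from the text (masses in Msun).
    """
    log_mean = np.log(1093.0 * (hz * M / 1e15) ** 0.34)
    return np.exp(-(log_sigma - log_mean) ** 2 / (2 * S ** 2)) / (np.sqrt(2 * np.pi) * S)

# sanity check: the density integrates to ~1 over log(sigma)
grid = np.linspace(np.log(500.0), np.log(2500.0), 2000)
total = np.trapz(p_sigma_given_M(grid, 1e15), grid)
```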
The second velocity dispersion is the observed velocity dispersion $\hat{\sigma}$. In \citet{Gifford13a}, we define this as the line-of-sight (l.o.s.) velocity dispersion. It has all kinds of ugly things in it, including cluster shape effects, contamination from the cluster environment, substructure, redshift-space interlopers, and non-Gaussianity, not to mention the low-number statistics at low mass. Messy as this observable is (most are), it is what we need to predict for a given mass $M$. So here is the generative model for the observable:
\begin{equation}
P(\hat{\sigma} | M) = \sum_{\sigma} P(\hat{\sigma} | \sigma) P(\sigma | M)
\end{equation}
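Because both factors are Gaussians in log space, marginalizing over $\sigma$ convolves them, and the result is another log-space Gaussian with variance $S_{\sigma|M}^{2} + S_{\hat{\sigma}|\sigma}^{2}$. A sketch that checks the discretized sum against that analytic shortcut (natural logs, $h(z)=1$, and $M = 10^{15}\,M_{\odot}$ assumed):

```python
import numpy as np

S1, S2 = 0.04, 0.25         # intrinsic and l.o.s. log scatter (natural logs assumed)
log_mean = np.log(1093.0)   # <sigma|M> at M = 1e15 Msun with h(z) = 1

def gauss(x, mu, s):
    """Normalized Gaussian density."""
    return np.exp(-(x - mu) ** 2 / (2 * s ** 2)) / (np.sqrt(2 * np.pi) * s)

# grid of "true" log(sigma) values to marginalize over
log_sig = np.linspace(log_mean - 5 * S2, log_mean + 5 * S2, 2001)

def p_sighat_given_M(log_sighat):
    """Discretized version of the sum over sigma."""
    integrand = gauss(log_sighat, log_sig, S2) * gauss(log_sig, log_mean, S1)
    return np.trapz(integrand, log_sig)

# compare the marginalization to the analytic convolution of the two Gaussians
numeric = p_sighat_given_M(log_mean + 0.1)
analytic = gauss(log_mean + 0.1, log_mean, np.hypot(S1, S2))
```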
Really there are completeness and purity terms in there as well, but let's ignore those for a moment. So that is our expected distribution for a given $M$. The other term, $P(\hat{\sigma} | \sigma)$, is equally important. It represents the probability of observing a velocity dispersion $\hat{\sigma}$ given a true dispersion $\sigma$. Why is this important? When we observe clusters in the real universe, we don't measure the ``Evrard'' velocity dispersion; we are randomly drawing from a distribution for which $\sigma$ is the expectation value. This is what \citet{Gifford13a} means by l.o.s. effects. So what is that distribution? It's approximately lognormal with $S_{\log(\hat{\sigma}) | \log(\sigma)}\sim 25\%$. So:
\begin{equation}
P(\hat{\sigma} | \sigma) = \frac{1}{\sqrt{2\pi}S_{\hat{\sigma} | \sigma}} e^{-\frac{(\log(\hat{\sigma}) - \log(\sigma))^{2}}{2 S_{\hat{\sigma} | \sigma}^{2}}}
\end{equation}
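One way to build intuition for this scatter is to draw mock observed dispersions around a true value. A sketch, assuming the 25\% scatter is in natural log (so the draws are lognormal with median $\sigma$):

```python
import numpy as np

rng = np.random.default_rng(42)
sigma_true = 1093.0   # "Evrard" dispersion of a 1e15 Msun cluster, km/s
S_los = 0.25          # ~25% line-of-sight scatter (natural log assumed)

# observed dispersions are lognormal draws whose median is sigma_true
sighat = sigma_true * rng.lognormal(mean=0.0, sigma=S_los, size=100_000)

# the scatter in log(sighat) recovers S_los
scatter = np.std(np.log(sighat))
```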
But we are binning! That means that we have a distribution of masses in our bin that we must integrate over. What does this integral look like?
\begin{equation}
\langle \hat{\sigma} \rangle = \int_{M_{\mathrm{min}}}^{M_{\mathrm{max}}} dM\, \frac{d \langle n \rangle}{dM}\, P(\hat{\sigma} | M)
\end{equation}
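The steps above can be sketched end to end as the bin-averaged distribution of $\hat{\sigma}$, normalized by the counts in the bin. This is only a sketch: the $M^{-2}$ mass function below is a placeholder (a real calculation would use a proper $d\langle n\rangle/dM$, e.g. from an N-body calibration), logs are natural, the two scatters are combined in quadrature as above, and $h(z)=1$:

```python
import numpy as np

def gauss(x, mu, s):
    return np.exp(-(x - mu) ** 2 / (2 * s ** 2)) / (np.sqrt(2 * np.pi) * s)

def p_sighat_given_M(log_sighat, M, S=np.hypot(0.04, 0.25)):
    """P(sighat|M): both log-space scatters combined in quadrature."""
    log_mean = np.log(1093.0 * (M / 1e15) ** 0.34)  # h(z) = 1 assumed
    return gauss(log_sighat, log_mean, S)

def p_sighat_in_bin(log_sighat, M_lo, M_hi, n_mass=300):
    """Bin-averaged P(sighat): integrate P(sighat|M) against dn/dM."""
    M = np.logspace(np.log10(M_lo), np.log10(M_hi), n_mass)
    dndM = M ** -2.0                  # *placeholder* mass function
    num = np.trapz(dndM * p_sighat_given_M(log_sighat, M), M)
    return num / np.trapz(dndM, M)    # normalize by counts in the bin
```

Because the steep mass function weights the bin toward low masses, the bin-averaged distribution peaks below the dispersion of the bin's upper mass edge.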