Phil Marshall added Appendix.tex almost 11 years ago

Commit id: f8ce845ae3a8959e38bcf0f1ed1e425f9890b988

\appendix

\section{Cosmographic Accuracy}

In this appendix we show how time delay precision can be approximately related to precision in cosmological parameters. We do so by emulating a joint inference of $H_0$ given a sample of $N$ observed strong lenses, each providing (for simplicity) a single measured time delay $\Delta t_k$. The $k^{\rm th}$ measurement contributes a factor to the joint likelihood, which, written as a function of all the independently obtained data $\mathbf{\Delta t}$, is the probability distribution
\begin{equation}
{\rm Pr}(\mathbf{\Delta t}|H_0) = \prod_{k=1}^N {\rm Pr}(\Delta t_k|H_0).
\label{eq:prodpdf}
\end{equation}
If we knew that the uncertainties on the measured time delays were normally distributed, we could write the (unnormalised) PDF for each datum as
\begin{equation}
{\rm Pr}(\Delta t_k|H_0) = \exp \left[ -\frac{(\Delta t_k - \alpha_k / H_0)^2}{2(\sigma_k^2 + \sigma_0^2)} \right].
\label{eq:gaussian}
\end{equation}
Here we have used the general relation that the predicted time delay is inversely proportional to the Hubble constant. Indeed, for a simulated lens whose true time delay $\Delta t_k^*$ is known, $\alpha_k$ must be equal to the product $\Delta t_k^* H_0^*$, where $H_0^*$ is the true value of the Hubble constant (used in the simulation). $H_0$ is the parameter being inferred: how much it differs from the true value is of great interest to us. The denominator of the exponent contains two terms, expressing the combined uncertainty due to the time delay estimation, $\sigma_k$, and the uncertainty in the lens model, $\sigma_0$, that would have been used to predict the time delay.

In practice, the probability for the measured time delay given the light curve data will not be Gaussian. However, for simplicity we can still use Equation~\ref{eq:gaussian} as an approximation, by asking for measurements of time delays to be reported as $\Delta t_k \pm \sigma_k$, and then interpreting these two numbers as above.

Returning to the emulated analysis, we might imagine evaluating the product of PDFs in Equation~\ref{eq:prodpdf}, and plotting the resulting likelihood for $H_0$. This distribution will have some median $\hat{H}_0$ and 68\% confidence interval $\sigma_{H_0}$. From these values we could define the precision as
\begin{equation}
P = \frac{\sigma_{H_0}}{H_0^*} \times 100\%
\end{equation}
and the bias~$B$ as
\begin{equation}
B = \frac{\left( \hat{H}_0 - H_0^* \right)}{H_0^*}.
\end{equation}

We can now roughly estimate the available precision on $H_0$ from the sample as $P \approx \langle P_k \rangle / \sqrt{N}$, where $P_k$ is the precision contributed by the $k^{\rm th}$ lens alone. If $P$ is to reach 0.2\% from a sample of 1000 lenses, we require an approximate average precision per lens of $\langle P_k \rangle \approx 0.2\% \times \sqrt{1000} \approx 6.3\%$. In turn, this implies that we need to be able to measure individual time delays to 3.8\% precision, or better, on average, since combining 3.8\% in quadrature with the 5\% mass model uncertainty gives $\sqrt{3.8^2 + 5^2}\% \approx 6.3\%$.

Values of $P$ and $B$ could be computed for any contributed likelihood function, and used to compare the associated measurement algorithms. Focusing on the likelihood for $H_0$ would allow us to do two things: first, derive well-defined targets for the analysis teams to aim for, and second, weight the different lens systems in approximately the right way, given our focus on cosmology.
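As a concrete illustration of this emulated analysis, the following minimal Python sketch evaluates the joint likelihood of Equations~\ref{eq:prodpdf}--\ref{eq:gaussian} on a grid of $H_0$ values for a mock lens sample, and then computes $P$ and $B$. All numerical choices here ($H_0^* = 70\,{\rm km\,s^{-1}\,Mpc^{-1}}$, delays drawn between 10 and 100 days, 3.8\% delay uncertainties, 5\% lens model uncertainties) are illustrative assumptions, not challenge specifications.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

H0_true = 70.0                 # assumed true H_0 [km/s/Mpc]
N = 1000                       # number of lenses in the sample
dt_true = rng.uniform(10.0, 100.0, N)        # true delays [days]
alpha = dt_true * H0_true                    # alpha_k = Delta t_k^* H_0^*
sigma_k = 0.038 * dt_true                    # assumed delay measurement errors
sigma_0 = 0.050 * dt_true                    # assumed lens model uncertainty
dt_obs = dt_true + rng.normal(0.0, sigma_k)  # mock measured delays

# Joint log-likelihood for H_0 (Gaussian approximation, Eq. gaussian)
H0 = np.linspace(65.0, 75.0, 2001)
pred = alpha[:, None] / H0[None, :]          # predicted delays alpha_k / H_0
chi2 = (dt_obs[:, None] - pred)**2 \
       / (sigma_k[:, None]**2 + sigma_0[:, None]**2)
loglike = -0.5 * chi2.sum(axis=0)

# Normalise, then extract the median and 68% interval of the H_0 likelihood
post = np.exp(loglike - loglike.max())
dH0 = H0[1] - H0[0]
post /= post.sum() * dH0
cdf = np.cumsum(post) * dH0
H0_med = np.interp(0.50, cdf, H0)
H0_lo, H0_hi = np.interp([0.16, 0.84], cdf, H0)

P = 100.0 * 0.5 * (H0_hi - H0_lo) / H0_true  # precision [per cent]
B = (H0_med - H0_true) / H0_true             # bias (fractional)
print(f"P = {P:.2f}%, B = {B:+.4f}")
\end{verbatim}
With these inputs the per-lens fractional uncertainty is roughly $\sqrt{3.8^2 + 5^2}\% \approx 6.3\%$, so the recovered $P$ should come out near the 0.2\% target discussed above.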
However, by working with the analogous expressions for the time delays themselves ... {\bf PJM: Tommaso to complete this section or remove it?}