\section{Cosmographic Accuracy}

In this appendix we show how time delay precision can be approximately related to precision in cosmological parameters, thereby justifying the challenge requirements in that context. We do so by considering the emulation of a joint inference of $H_0$ given a sample of $N$ observed strong lenses, each providing (for simplicity) a single measured time delay $\tilde{\Delta t}_i$. This $i^{\rm th}$ measurement is encoded in a contribution to the joint likelihood, which, when written as a function of all the independently obtained data $\mathbf{\tilde{\Delta t}}$, is the probability distribution
\begin{equation}
  {\rm Pr}(\mathbf{\tilde{\Delta t}}|H_0) = \prod_{i=1}^N {\rm Pr}(\tilde{\Delta t}_i|H_0).
\label{eq:prodpdf}
\end{equation}
If we knew that the uncertainties on the measured time delays were normally distributed, we could write the (unnormalised) PDF for each datum as
\begin{equation}
  {\rm Pr}(\tilde{\Delta t}_i|H_0) = \exp \left[ -\frac{(\tilde{\Delta t}_i - \alpha_i / H_0)^2}{2(\sigma_i^2 + \sigma_0^2)} \right].
\label{eq:gaussian}
\end{equation}
Here, we have used the general relation that the predicted time delay is inversely proportional to the Hubble constant. Indeed, for a simulated lens whose true time delay $\Delta t_i^*$ is known, we can see that $\alpha_i$ must be equal to the product $(\Delta t_i^* H_0^*)$, where $H_0^*$ is the true value of the Hubble constant (used in the simulation). $H_0$ is the parameter being inferred: how different it is from the true value is of great interest to us. The denominator of the exponent contains two terms, which express the combined uncertainty due to the time delay estimation, $\sigma_i$, and the uncertainty in the lens model, $\sigma_0$, that would have been used to predict the time delay.

In practice, the probability for the measured time delay given the light curve data will not be Gaussian. However, for simplicity we can still use Equation~\ref{eq:gaussian} as an approximation, by asking for measurements of time delays to be reported as $\tilde{\Delta t}_i \pm \sigma_i$, and then interpreting these two numbers as above.

We can now roughly estimate the available precision on $H_0$ from the sample as $P \approx \langle P_i \rangle / \sqrt{N}$, where $P$ is the percentage precision of the joint inference (defined below) and $P_i$ is the corresponding precision available from the $i^{\rm th}$ lens alone. If $P$ is to reach 0.2\% from a sample of 1000 lenses, we require an approximate average precision per lens of $\langle P_i \rangle \approx 0.2\% \times \sqrt{1000} \approx 6.3\%$. In turn, this implies that we need to be able to measure individual time delays to 3.8\% precision, or better, on average, in order to stay under 6.3\% when combined in quadrature with the 5\% mass model uncertainty (since $\sqrt{6.3^2 - 5.0^2} \approx 3.8$).

Returning to the emulated analysis, we might imagine evaluating the product of PDFs in Equation~\ref{eq:prodpdf}, and plotting the resulting likelihood for $H_0$. This distribution will have some median $\hat{H_0}$ and 68\% confidence interval $\sigma_{H_0}$. From these values we could define the precision as
\begin{equation}
  P = \frac{\sigma_{H_0}}{H_0^*} \times 100\%
\end{equation}
and the bias~$B$ as
\begin{equation}
  B = \frac{\left( \hat{H_0} - H_0^* \right)}{H_0^*}.
\end{equation}
Values of $P$ and $B$ could be computed for any contributed likelihood function, and used to compare the associated measurement algorithms.
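As an illustration, the short Python sketch below (using only \texttt{numpy}) carries out this emulated analysis end to end: it simulates a sample of lenses with Gaussian time delay and model uncertainties, evaluates the joint likelihood of Equation~\ref{eq:prodpdf} on a grid of trial $H_0$ values, and computes $P$ and $B$ as defined above, together with the back-of-the-envelope per-lens targets. The specific numbers used ($H_0^* = 72\,{\rm km\,s^{-1}\,Mpc^{-1}}$, $N = 1000$ lenses, time delays drawn uniformly between 10 and 100 days, 3.8\% delay and 5\% model uncertainties) are illustrative assumptions only, not challenge specifications.
\begin{verbatim}
# Minimal sketch of the emulated H_0 inference described above, assuming
# Gaussian time delay uncertainties. All specific numbers are illustrative.
import numpy as np

rng = np.random.default_rng(42)

H0_true = 72.0                          # H_0^*, the value used in the simulation
N = 1000                                # number of lenses in the sample
dt_true = rng.uniform(10.0, 100.0, N)   # true time delays Delta t_i^* (days)
alpha = dt_true * H0_true               # alpha_i = Delta t_i^* x H_0^*

sigma_i = 0.038 * dt_true               # per-lens time delay uncertainty (3.8%)
sigma_0 = 0.050 * dt_true               # per-lens model uncertainty (5%)
sigma2 = sigma_i**2 + sigma_0**2        # combined variance in the denominator

# "Measured" delays: the truth scattered by the combined uncertainty
dt_obs = dt_true + rng.normal(0.0, np.sqrt(sigma2))

# Joint log-likelihood of Eqs (prodpdf) and (gaussian) on a grid of trial H_0
H0_grid = np.linspace(60.0, 85.0, 2001)
pred = alpha[:, None] / H0_grid[None, :]            # predicted delays alpha_i / H_0
lnL = -0.5 * np.sum((dt_obs[:, None] - pred)**2 / sigma2[:, None], axis=0)
post = np.exp(lnL - lnL.max())

# Median and 68% interval from the cumulative distribution
cdf = np.cumsum(post) / np.sum(post)
H0_lo, H0_med, H0_hi = np.interp([0.16, 0.50, 0.84], cdf, H0_grid)

P = 100.0 * 0.5 * (H0_hi - H0_lo) / H0_true         # precision P (per cent)
B = (H0_med - H0_true) / H0_true                    # bias B (fractional)
print(f"P = {P:.2f}%   B = {B:+.4f}")

# Back-of-the-envelope targets quoted in the text
P_per_lens = 0.2 * np.sqrt(N)                       # ~6.3% needed per lens
dt_target = np.sqrt(P_per_lens**2 - 5.0**2)         # ~3.8% after removing the
print(f"per-lens precision ~ {P_per_lens:.1f}%, "   # 5% model term in quadrature
      f"time delay precision ~ {dt_target:.1f}%")
\end{verbatim}
With these inputs the recovered precision should come out close to the 0.2\% target, consistent with the quadrature argument above.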
Focusing on the likelihood for $H_0$ would allow us to do two things: first, derive well-defined targets for the analysis teams to aim for, and second, weight the different lens systems in approximately the right way, given our focus on cosmology. A similar argument holds for the time delays themselves ... {\bf PJM: Tommaso to complete this section or remove it?}