The overall goal of TDC0 and TDC1 is to carry out a blind test of current state-of-the-art time-delay estimation algorithms, in order to quantify the available accuracy. The criteria for success depend on the time horizon. At present, time-delay cosmology is limited by the number of lenses with measured light curves and by the modeling uncertainties, which are of order 5\% per system. Furthermore, current distance measurements reach accuracies of about 3\%. Therefore, any method that can provide time delays with realistic uncertainties ($\chi^2 < 1.5\,fN$) for the majority ($f > 0.75$) of light curves, with accuracy $A$ and precision $P$ better than 3\%, can be considered a viable method.

In the longer run, with LSST in mind, a desirable goal is to achieve $A < 0.002$, in order not to be systematics limited while retaining the same average precision $P$, which is where the target 0.2\% cosmographic precision comes from. For $N = 1000$, the $2\sigma$ goodness-of-fit requirement becomes $\chi^2 < 1.09\,fN$, while keeping $f > 0.5$. Testing for such extreme accuracy requires a large sample of lenses: TDC1 will contain several thousand simulated systems to enable such tests.

The analysis we wish to emulate is therefore a joint inference of $H_0$ given a sample of $N$ observed strong lenses, each providing (for simplicity) a single measured time delay $\Delta t_k$. This $k^{\rm th}$ measurement is encoded in a contribution to the joint likelihood, which, when written as a function of all the independently obtained data $\mathbf{\Delta t}$, is the probability distribution
\begin{equation}
  {\rm Pr}(\mathbf{\Delta t}|H_0) = \prod_{k=1}^N {\rm Pr}(\Delta t_k|H_0).
  \label{eq:prodpdf}
\end{equation}
If we knew that the uncertainties on the measured time delays were normally distributed, we could write the (unnormalised) PDF for each datum as
\begin{equation}
  {\rm Pr}(\Delta t_k|H_0) = \exp \left[ -\frac{(\Delta t_k - \alpha_k / H_0)^2}{2(\sigma_k^2 + \sigma_0^2)} \right].
  \label{eq:gaussian}
\end{equation}
Here, we have used the general relation that the predicted time delay is inversely proportional to the Hubble constant. Indeed, for a simulated lens whose true time delay $\Delta t_k^*$ is known, we can see that $\alpha_k$ must be equal to the product $(\Delta t_k^* H_0^*)$, where $H_0^*$ is the true value of the Hubble constant used in the simulation. $H_0$ is the parameter being inferred: how different it is from the true value is of great interest to us. The denominator of the exponent contains two terms, which express the combined uncertainty due to the time-delay estimation, $\sigma_k$, and the uncertainty in the lens model, $\sigma_0$, that would have been used to predict the time delay.

In practice, the probability for the measured time delay given the light curve data will not be Gaussian. However, for simplicity we can still use Equation~\ref{eq:gaussian} as an approximation, by asking for measurements of time delays to be reported as $\Delta t_k \pm \sigma_k$, and then interpreting these two numbers as above.
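To make this concrete, the short Python sketch below evaluates the emulated joint likelihood of Equations~\ref{eq:prodpdf} and~\ref{eq:gaussian} on a grid of trial $H_0$ values, given reported estimates $\Delta t_k \pm \sigma_k$ and the true delays $\Delta t_k^*$ from the simulation. The function name, the toy sample, the conversion of the 5\% lens model uncertainty into $\sigma_0$ in days, and the fiducial $H_0^* = 70$ are illustrative assumptions rather than part of the challenge specification.
\begin{verbatim}
import numpy as np

def joint_log_likelihood(H0_grid, dt_est, sigma_est, dt_true,
                         H0_true=70.0, model_frac_err=0.05):
    """Emulated joint log-likelihood for H0: sum over lenses of the
    Gaussian exponents defined in the text.

    H0_grid        : trial values of H0
    dt_est         : reported time delays Delta t_k (days)
    sigma_est      : reported uncertainties sigma_k (days)
    dt_true        : true simulated delays Delta t_k^* (days)
    H0_true        : true Hubble constant used in the simulation (assumed)
    model_frac_err : assumed 5% lens-model uncertainty, giving sigma_0 in days
    """
    alpha = dt_true * H0_true              # alpha_k = Delta t_k^* H_0^*
    sigma0 = model_frac_err * dt_true      # sigma_0 per lens (assumption)
    var = sigma_est**2 + sigma0**2         # combined variance per lens
    resid = dt_est[None, :] - alpha[None, :] / H0_grid[:, None]
    return -0.5 * np.sum(resid**2 / var[None, :], axis=1)

# Toy usage with N = 1000 lenses measured to ~3.8% delay precision:
rng = np.random.default_rng(0)
dt_true = rng.uniform(10.0, 100.0, size=1000)
sigma_k = 0.038 * dt_true
dt_est = dt_true + rng.normal(0.0, sigma_k)
H0_grid = np.linspace(60.0, 80.0, 2001)
logL = joint_log_likelihood(H0_grid, dt_est, sigma_k, dt_true)
\end{verbatim}
Because each lens enters with weight $1/(\sigma_k^2 + \sigma_0^2)$, the best-measured systems automatically dominate the constraint on $H_0$.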
We can now roughly estimate the available precision on $H_0$ from the sample as $P \approx \langle P_k \rangle / \sqrt{N}$: if $P$ is to reach 0.2\% from a sample of 1000 lenses, we require an approximate average precision per lens of $\langle P_k \rangle \approx 6.3\%$. In turn, this implies that we need to be able to measure individual time delays to 3.8\% precision, or better, on average, in order to stay under 6.3\% when combined in quadrature with the 5\% mass model uncertainty.

Returning to the emulated analysis, we imagine evaluating the product of PDFs in Equation~\ref{eq:prodpdf} and plotting the resulting likelihood for $H_0$. This distribution will have some median $\hat{H_0}$ and 68\% confidence interval $\sigma_{H_0}$. From these values we define the precision (as already seen above) as
\begin{equation}
  P = \frac{\sigma_{H_0}}{H_0^*} \times 100\%
\end{equation}
and the bias~$B$ as
\begin{equation}
  B = \frac{\hat{H_0} - H_0^*}{\sigma_{H_0}}.
\end{equation}
Values of $P$ and $B$ can be computed for any contributed likelihood function, and used to compare the associated measurement algorithms. Focusing on the likelihood for $H_0$ allows us to do two things: first, derive well-defined targets for the analysis teams to aim for, and second, weight the different lens systems in approximately the right way, given our focus on cosmology.
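As a sketch of how these two metrics could be evaluated in practice, the snippet below extracts the median $\hat{H_0}$ and 68\% confidence interval $\sigma_{H_0}$ from a likelihood tabulated on a grid, such as the one computed above, and returns $P$ and $B$. The function name, the interpolation of the cumulative distribution, and the fiducial $H_0^* = 70$ are again illustrative choices.
\begin{verbatim}
import numpy as np

def precision_and_bias(H0_grid, logL, H0_true=70.0):
    """Compute P (in per cent) and B from a likelihood on a grid of H0."""
    like = np.exp(logL - logL.max())   # normalise peak to 1 to avoid overflow
    cdf = np.cumsum(like)
    cdf /= cdf[-1]
    # Median and 68% interval from the cumulative distribution
    H0_med, H0_lo, H0_hi = np.interp([0.50, 0.16, 0.84], cdf, H0_grid)
    sigma_H0 = 0.5 * (H0_hi - H0_lo)
    P = sigma_H0 / H0_true * 100.0     # P = sigma_H0 / H0^* x 100%
    B = (H0_med - H0_true) / sigma_H0  # B = (H0_hat - H0^*) / sigma_H0
    return P, B

# Using the H0_grid and logL from the previous sketch:
P, B = precision_and_bias(H0_grid, logL)
\end{verbatim}
Any contributed set of $\Delta t_k \pm \sigma_k$ estimates can be pushed through the same machinery, so that different algorithms are ranked on a common footing.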