With the upcoming \emph{Large Synoptic Survey Telescope} (LSST), we will have the first long-baseline, multi-epoch observational campaign of $\sim$1000 lensed quasars [REFS] from which time delays can in principle be extracted. However, if we are to use these time delays as a means of doing precision cosmology (and in particular for measuring $H_0$ precisely), two questions must be addressed. First, to what accuracy can time delays be measured from individual double- or quadruply-imaged systems for which the sampling rate and campaign length are given by LSST? Simple techniques like dispersion minimization {\bf (GGD: Chris, what is this technique officially called and are there good references???)} or spline interpolation through the sparsely sampled data [REFS?] yield time delays which are likely to be too inaccurate, while more complex algorithms such as Gaussian Process modeling [REFS] hold more promise. Regardless, it is presently unclear whether the long baseline of surveys like LSST ($\sim10$ years) can yield very accurate time delays despite the somewhat poor sampling ($\sim10$ days). Second, do $\sim1000$ reasonably measured time delays from LSST yield competitive constraints on $H_0$ compared to one or two very well studied systems [REFS]?\footnote{There is of course the additional complexity that it is not yet clear to what extent ``follow-up'' observations will be needed on the LSST lens systems. Certainly, following up all systems to the degree that, for example, [REF] did for B1608+656 to derive the most stringent lensing constraints on $H_0$ is clearly not feasible. This is a complex issue and beyond the scope of the present paper.}
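To make the dependence on $H_0$ explicit, recall the standard relation (written here in the usual notation, as an illustration rather than a description of any particular modeling code): the delay between images $i$ and $j$ of a strongly lensed quasar is set by the Fermat potential difference $\Delta\phi_{ij}$ predicted by the lens model and by the time-delay distance,
\begin{equation}
  \Delta t_{ij} \;=\; \frac{D_{\Delta t}}{c}\,\Delta\phi_{ij},
  \qquad
  D_{\Delta t} \;\equiv\; (1+z_{\rm d})\,\frac{D_{\rm d}\,D_{\rm s}}{D_{\rm ds}} \;\propto\; H_0^{-1},
\end{equation}
where $z_{\rm d}$ is the deflector redshift and $D_{\rm d}$, $D_{\rm s}$, and $D_{\rm ds}$ are the angular diameter distances to the deflector, to the source, and between the two. For a fixed lens model, the fractional uncertainty on $H_0$ inferred from a single system therefore tracks the fractional uncertainty on the measured $\Delta t_{ij}$, which is why the accuracy of time-delay estimation from LSST-like sampling is the central question of this work.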