With the upcoming \emph{Large Synoptic Survey Telescope} (LSST), we will have the first long-baseline, multi-epoch observational campaign of $\sim$1000 lensed quasars [REFS] from which time delays can in principle be extracted. However, if we are to use these time delays as a means of doing precision cosmology (and, in particular, of measuring $H_0$ precisely), two questions must be addressed. First, to what accuracy can time delays be measured from individual doubly- or quadruply-imaged systems for which the sampling rate and campaign length are set by LSST? Simple techniques, such as the ``dispersion technique'' developed by Pelt et al.\ [REFS] (a schematic form of this statistic is given below) or spline interpolation through the sparsely sampled data [REFS?], are likely to yield time delays that are too inaccurate, while more complex algorithms such as Gaussian Process modeling [REFS] hold more promise. Regardless, it is presently unclear whether the long baseline of surveys like LSST ($\sim$10 years) can yield very accurate time delays despite the relatively sparse sampling ($\sim$10 day cadence). Second, do $\sim$1000 reasonably measured time delays from LSST yield competitive constraints on $H_0$ compared to one or two very well studied systems [REFS]?\footnote{There is, of course, the additional complication that it is not yet clear to what extent ``follow-up'' observations will be needed for the LSST lens systems. Following up all systems to the degree that, for example, [REF] did for B1608+656 to derive the most stringent lensing constraints on $H_0$ is clearly not feasible. This is a complex issue and beyond the scope of the present paper.}

Of course, the latter question cannot be answered without the former, and so the goal of this work is to present a ``Time Delay Challenge'' (TDC) to the community, in an effort to assess the current ability of time series analysis algorithms to measure accurate time delays from the kind of realistic light curves that will be obtained with LSST. In \S \ref{sec:light_curves} we describe the mock light curves that we have generated for the challenge, including some of the broad details of the observational and physical effects that may make extracting accurate time delays difficult, without giving away information that will not be observationally knowable during or after the LSST survey. In \S \ref{sec:structure} we describe the structure of the challenge, how interested groups can access the mock light curves, and what the final criterion for ``success'' will be. {\bf (GGD: what is the criterion for success???)}
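For concreteness, the following is a schematic version of the dispersion statistic referred to above; it is intended only as an illustration, and the precise weighting schemes of the Pelt et al.\ estimators differ in detail. Let $C_n(\tau, \Delta m)$, $n = 1, \ldots, N$, denote the single time-ordered series obtained by combining the light curve of one image with that of the second image shifted by a trial delay $\tau$ and magnitude offset $\Delta m$. The dispersion spectrum is then
\begin{equation}
  D^2(\tau) \;=\; \min_{\Delta m} \,
  \frac{\sum_{n=1}^{N-1} W_n \left[ C_{n+1}(\tau, \Delta m) - C_n(\tau, \Delta m) \right]^2}
       {2 \sum_{n=1}^{N-1} W_n} \, ,
\end{equation}
where the weights $W_n$ can encode the photometric uncertainties and restrict the sum to neighboring pairs drawn from different images; the time delay estimate is the value of $\tau$ that minimizes $D^2(\tau)$. The appeal of such statistics is that they require no interpolation of the sparsely sampled data, which is precisely why their accuracy at LSST-like cadences is the question at hand.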