Phil Marshall edited introduction.tex  almost 11 years ago

Commit id: 25a26e97b0161adf646c9af06f3a993cbdb6ea41


With the upcoming \emph{Large Synoptic Survey Telescope} (LSST), we will have the first long-baseline, multi-epoch observational campaign on $\sim$1000 lensed quasars [REFS], from which time delays can in principle be extracted. However, if we are to use these time delays for precision cosmology (and in particular for measuring $H_0$ precisely), two questions must be addressed. First, to what accuracy can time delays be measured from individual doubly- or quadruply-imaged systems for which the sampling rate and campaign length are set by LSST? Simple techniques, such as the ``dispersion technique'' developed by Pelt et al.\ [REFS] or spline interpolation through the sparsely sampled data [REFS], may yield time delays that are insufficiently accurate for a Stage IV dark energy experiment. More complex algorithms, such as Gaussian Process modeling [REFS], hold more promise, but have also not been tested at scale. Somewhat independently of the measurement algorithm, it is at present unclear whether the planned LSST sampling frequency ($\sim$10 days in a given filter, $\sim$4 days on average across all filters; [REF science book/paper]) will enable sufficiently accurate time delay measurements, despite the long campaign length ($\sim$10 years). Second, would $\sim$1000 time delays measured from LSST-like data yield competitive constraints on $H_0$? While ``follow-up'' monitoring observations to supplement the LSST light curves may be feasible, at least for the brightest systems, here we conservatively focus on what the survey data alone can provide.
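% A minimal sketch of the dispersion technique mentioned above, for illustration only: for each trial delay, curve B is shifted in time, both curves are merged into one time-ordered sequence, and the mean squared flux difference between consecutive points drawn from different images is computed; the estimated delay is the minimizer. All variable names and the synthetic, noise-free light curves are hypothetical, not LSST data or the specific implementation of Pelt et al.
% \begin{verbatim}
% import numpy as np
%
% def dispersion(t_a, f_a, t_b, f_b, delay):
%     # Shift curve B back by the trial delay and remove a crude
%     # magnitude offset (mean matching), then merge both curves
%     # into a single time-ordered sequence.
%     t = np.concatenate([t_a, t_b - delay])
%     f = np.concatenate([f_a, f_b - f_b.mean() + f_a.mean()])
%     src = np.concatenate([np.zeros(t_a.size), np.ones(t_b.size)])
%     order = np.argsort(t)
%     f, src = f[order], src[order]
%     # Mean squared flux difference over consecutive points that come
%     # from *different* images: small when the trial delay is right.
%     cross = src[:-1] != src[1:]
%     return np.mean((f[1:][cross] - f[:-1][cross]) ** 2)
%
% def estimate_delay(t_a, f_a, t_b, f_b, trial_delays):
%     disp = [dispersion(t_a, f_a, t_b, f_b, d) for d in trial_delays]
%     return trial_delays[int(np.argmin(disp))]
% \end{verbatim}

```python
import numpy as np

def dispersion(t_a, f_a, t_b, f_b, delay):
    # Shift curve B back by the trial delay and remove a crude
    # magnitude offset (mean matching), then merge both curves
    # into a single time-ordered sequence.
    t = np.concatenate([t_a, t_b - delay])
    f = np.concatenate([f_a, f_b - f_b.mean() + f_a.mean()])
    src = np.concatenate([np.zeros(t_a.size), np.ones(t_b.size)])
    order = np.argsort(t)
    f, src = f[order], src[order]
    # Mean squared flux difference over consecutive points that come
    # from *different* images: small when the trial delay is right.
    cross = src[:-1] != src[1:]
    return np.mean((f[1:][cross] - f[:-1][cross]) ** 2)

def estimate_delay(t_a, f_a, t_b, f_b, trial_delays):
    disp = [dispersion(t_a, f_a, t_b, f_b, d) for d in trial_delays]
    return trial_delays[int(np.argmin(disp))]
```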