The accuracy of $H_0$ derived from the data for a given lens system depends on both the mass model for that system and the precision with which the lensing observables are measured. Typically, the positions and fluxes (and occasionally shapes, if the source is resolved) of the images can be obtained to sub-percent accuracy [REFS], but time delay accuracies are usually on the order of days, or a few percent, for typical systems. This is compounded by the fact that measuring time delays requires continuous monitoring over months to years, and, as one might expect intuitively, the precision of the measured time delay does not significantly exceed the sampling cadence (the time between epochs). [{\bf CDF: I don't think this is necessarily true, even if it is intuitive. For the B1608 VLA analysis, the mean spacing between epochs was $\sim$3 days, but the 68\% uncertainties were on the order of $\pm$1--1.5 days. This may be because the light curves were {\em not} evenly sampled, or it may be due to the fact that 2$\sigma$ is around the epoch spacing.}]

With the upcoming \emph{Large Synoptic Survey Telescope} (LSST), we will have the first long-baseline, multi-epoch observational campaign on $\sim$1000 lensed quasars [REFS] from which time delays can in principle be extracted. However, if we are to use these time delays for precision cosmology (and in particular for measuring $H_0$ precisely), we must quantify the measurement accuracy that we can expect to obtain. To what accuracy can time delays be measured from individual doubly- or quadruply-imaged systems for which the sampling rate and campaign length are given by LSST? Simple techniques such as the ``dispersion'' method \citep{PeltEtal1994,PeltEtal1996} or spline interpolation through the sparsely sampled data [REFS?] yield time delays that may be insufficiently accurate for a Stage IV dark energy experiment. More complex algorithms, such as Gaussian Process modeling [REFS], hold more promise, but have also not been tested at scale. Somewhat independently of the measurement algorithm, it is at present unclear whether the planned LSST sampling frequency ($\sim$10 days in a given filter, $\sim$4 days on average across all filters, REF science book/paper) will enable sufficiently accurate time delay measurements, despite the long campaign length ($\sim$10 years). While ``follow-up'' monitoring observations to supplement the LSST light curves may be feasible, at least for the brightest systems, here we conservatively focus on what the survey data alone can provide.

How well will we need to measure time delays in a future cosmography program? The gravitational lens ``time delay distance'' \citep[e.g.][]{Suy++2012} is primarily sensitive to the Hubble constant, $H_0$. While we expect the LSST lens sample to also provide interesting constraints on other cosmological parameters, notably the curvature density and the dark energy equation of state, as a first approximation we can quantify cosmographic accuracy via $H_0$. Certainly, an accurate measurement of $H_0$ would be a highly valuable addition to a joint analysis, provided it was precise to around 0.2\% \citep[and hence competitive with Stage IV BAO experiments, for example;][]{Weinberg,SuyuEtal2012}. To reach this precision will require a sample of some 1000 lenses, with an approximate average precision per lens of around $6\%$.
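As a rough consistency check (an illustrative estimate only, assuming the per-lens constraints are independent and combine in quadrature; the symbols $\sigma_{\rm lens}$ and $N$ are introduced here for this estimate alone), $N$ lenses each delivering $H_0$ to a precision of $\sigma_{\rm lens}$ yield an ensemble precision of
\begin{equation}
  \frac{\sigma_{H_0}}{H_0} \approx \frac{\sigma_{\rm lens}}{\sqrt{N}} \approx \frac{6\%}{\sqrt{1000}} \approx 0.2\%,
\end{equation}
matching the target precision quoted above.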
\citet{Suy++2013a} find the uncertainty in $H_0$ due to the lens model to be approximately 5--6\%, resulting from two approximately equal 4\% contributions: one from the main lens mass distribution and one from the weak lensing effects of mass along the line of sight to the lens. In a large sample, we expect the environment uncertainty to be somewhat lower, as we will sample lines of sight that are less over-dense than those of the systems studied so far \citep{Gre++2013,Col++2013}, so we might take the overall lens model uncertainty to be around 5\% per lens. In turn, this implies that we need to be able to measure individual time delays to around 3\% precision or better, on average, in order to stay under 6\% when combined in quadrature with the 5\% mass model uncertainty. This is the challenge we face.
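To make this error budget explicit (again an illustrative estimate, treating the mass model and time delay contributions as independent terms that add in quadrature), the implied average per-lens uncertainty is
\begin{equation}
  \sigma_{\rm lens} \approx \sqrt{(5\%)^2 + (3\%)^2} \approx 5.8\%,
\end{equation}
just within the $\sim6\%$ average per-lens precision assumed above.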