
When a quasar is lensed into multiple images by a foreground galaxy, the observable properties of the lensed images are their positions, magnifications, and time delays. These observables can be used to measure directly both the structure of the lens galaxy itself (on scales $\geq M_{\odot}$) and cosmological parameters. In particular, while it has recently been proposed to use time delays to study massive substructures within lens galaxies \citep{KeetonAndMoustakas2009}, they have \emph{already} proven to be a powerful tool for measuring cosmological parameters, most notably the Hubble constant, $H_0$ \citep[see, e.g.,][]{SuyuEtal2013}, using the method first proposed by \citet{Refsdal1964}.

The history of the measurement of time delays in lens systems can be broadly split into three phases. In the first, the majority of the efforts were aimed at the first known lens system, Q0957+561 \citep{WalshEtal1979}. This system presented a particularly difficult situation for time-delay measurements, because the variability was smooth and relatively modest in amplitude, and because the time delay was long. The latter point meant that the annual season gaps, when the source could not be observed at optical wavelengths, complicated the analysis much more than they would have for systems with time delays of significantly less than one year. The value of the time delay remained controversial, with adherents of the ``long'' and ``short'' delays \citep[e.g.,][]{PressEtal1992a,PressEtal1992b,PeltEtal1996} in disagreement until a sharp event in the light curves resolved the issue \citep{Kundic1995,Kundic1997}.

The second phase of time delay measurements began in the mid-1990s, by which time tens of lens systems were known and small-scale but dedicated lens monitoring programs were being conducted. With the larger number of systems, there were a number of lenses whose time delays were well suited to a focused monitoring program, i.e., delays on the order of 10--150~days. Furthermore, advances in image processing techniques, notably the image deconvolution method developed by \citet{MCS}, allowed optical monitoring of systems in which the image separation was small compared to the seeing. The monitoring programs, conducted at both optical and radio wavelengths, produced robust time delay measurements \citep[e.g.,][]{Biggs1997,Lovell1997,Fassnacht1998,Fassnacht2002,Burud2002a,Burud2002b}, even using fairly simple analysis methods such as cross-correlation, maximum likelihood, or the ``dispersion'' method introduced by \citet{Pelt1994,Pelt1996} (a minimal sketch of such an estimator is given below).

The third and current phase, which began roughly in the mid-2000s, has involved large and systematic monitoring programs that take advantage of the increasing amount of time available on 1--2~m class telescopes. Examples include the SMARTS program \citep[e.g.,][]{Kochanek2006}, the Liverpool Telescope robotic monitoring program \citep[e.g.,][]{Goicoechea2008}, and the COSMOGRAIL program \citep[e.g.,][]{Eigenbrod2005}. These programs have shown that it is possible to take an industrial-scale approach to lens monitoring and produce good time delays \citep[e.g.,][]{Tewes2013,Eulaers2013,RathnaKumar2013}. The next phase, which has already begun, will be lens monitoring with new large-scale surveys that include time-domain information, such as PanSTARRS and LSST.
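To make the flavour of these simple estimators concrete, the following is a minimal Python sketch of a dispersion-style statistic in the spirit of \citet{Pelt1994,Pelt1996}; the function name, toy source signal, sampling, and noise level are purely illustrative, and the magnification ratio between the two images is assumed to have already been removed, so this should be read as a sketch rather than as the estimator used in any of the programs cited above.
\begin{verbatim}
import numpy as np

def dispersion_delay(t_a, f_a, t_b, f_b, trial_delays):
    # Shift curve B by each trial delay, merge the two curves in time, and
    # average the squared flux differences between time-adjacent points that
    # come from different images; the minimum over trial delays is the delay
    # estimate.  Assumes both curves are already on a common flux scale.
    disp = np.empty(len(trial_delays))
    for k, tau in enumerate(trial_delays):
        t = np.concatenate([t_a, t_b - tau])
        f = np.concatenate([f_a, f_b])
        img = np.concatenate([np.zeros(t_a.size), np.ones(t_b.size)])
        order = np.argsort(t)
        f, img = f[order], img[order]
        cross = img[1:] != img[:-1]   # adjacent points from different images
        disp[k] = np.mean((f[1:] - f[:-1])[cross] ** 2)
    return trial_delays[np.argmin(disp)], disp

# Toy usage: one smooth source signal seen in two images, true delay 40 days.
rng = np.random.default_rng(0)
signal = lambda t: np.sin(2.0 * np.pi * t / 200.0)
t_a = np.sort(rng.uniform(0.0, 1000.0, 150))
t_b = np.sort(rng.uniform(0.0, 1000.0, 150))
f_a = signal(t_a) + 0.02 * rng.normal(size=t_a.size)
f_b = signal(t_b - 40.0) + 0.02 * rng.normal(size=t_b.size)
tau_hat, _ = dispersion_delay(t_a, f_a, t_b, f_b, np.arange(-100.0, 100.0, 0.5))
\end{verbatim}
Real light curves are of course far less forgiving than this toy example, the season gaps discussed above being one reason.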
The accuracy of $H_0$ derived from the data for a given lens system depends on both the mass model for that system and the precision with which the lensing observables are measured. Typically, the positions and fluxes (and occasionally shapes, if the source is resolved) of the images can be obtained to sub-percent accuracy [REFS], but time delay accuracies are usually on the order of days, or a few percent, for typical systems. This is compounded by the fact that measuring time delays requires continuous monitoring over months to years, and, as one might expect intuitively, the uncertainty on the measured time delay is typically not much smaller than the time between epochs (the sampling interval). [{\bf CDF: I don't think this is necessarily true, even if it is intuitive. For the B1608 VLA analysis, the mean spacing between epochs was $\sim$3 days, but the 68\% uncertainties were on the order of $\pm$1--1.5 days. This may be because the light curves were {\em not} evenly sampled, or it may be due to the fact that 2$\sigma$ is around the epoch spacing.}]
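As a rough, hedged illustration of this point (with an assumed two-sinusoid variability pattern, noise level, season length, and mean epoch spacing chosen purely for convenience, none of them taken from a real monitoring campaign), the following toy Monte Carlo recovers a known delay from many noise and sampling realizations by $\chi^2$ minimization against a linearly interpolated version of the first light curve, and reports the 68\% interval of the recovered delay errors:
\begin{verbatim}
import numpy as np

def estimate_delay(t_a, f_a, t_b, f_b, trials):
    # chi-square of curve B against curve A, linearly interpolated at the
    # shifted epochs (edge effects are ignored in this toy example)
    chi2 = [np.sum((np.interp(t_b - tau, t_a, f_a) - f_b) ** 2) for tau in trials]
    return trials[np.argmin(chi2)]

rng = np.random.default_rng(1)
true_delay, season, spacing, noise = 40.0, 200.0, 3.0, 0.03
signal = lambda t: (np.sin(2.0 * np.pi * t / 150.0)
                    + 0.3 * np.sin(2.0 * np.pi * t / 53.0))
trials = np.arange(30.0, 50.0, 0.1)

errors = []
for _ in range(200):
    # irregular sampling with a mean spacing of 'spacing' days over one season
    t_a = np.sort(rng.uniform(0.0, season, int(season / spacing)))
    t_b = np.sort(rng.uniform(0.0, season, int(season / spacing)))
    f_a = signal(t_a) + noise * rng.normal(size=t_a.size)
    f_b = signal(t_b - true_delay) + noise * rng.normal(size=t_b.size)
    errors.append(estimate_delay(t_a, f_a, t_b, f_b, trials) - true_delay)

print(np.percentile(errors, [16, 50, 84]))  # 68% interval of the delay error
\end{verbatim}
Whether the resulting interval is narrower or wider than the mean epoch spacing depends entirely on the assumed variability, noise, and sampling; answering that question for realistic LSST-like light curves is precisely the purpose of the challenge introduced below.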

With the upcoming \emph{Large Synoptic Survey Telescope} (LSST), we will have the first long-baseline, multi-epoch observational campaign on $\sim$1000 lensed quasars [REFS] from which time delays can in principle be extracted. However, if we are to use these time delays as a means of doing precision cosmology (and in particular of measuring $H_0$ precisely), we must quantify the measurement accuracy that we can expect to obtain. To what accuracy can time delays be measured from individual double- or quadruply-imaged systems for which the sampling rate and campaign length are given by LSST? Simple techniques such as the ``dispersion'' method \citep{Pelt1994,Pelt1996} or spline interpolation through the sparsely sampled data [REFS?] may yield time delays that are insufficiently accurate for a Stage IV dark energy experiment. More complex algorithms such as Gaussian Process modeling [REFS] hold more promise, but have also not been tested at scale. Somewhat independently of the measurement algorithm, it is at present unclear whether the planned LSST sampling frequency ($\sim10$ days in a given filter, $\sim 4$ days on average across all filters, REF science book/paper) will enable sufficiently accurate time delay measurements, despite the long campaign length ($\sim10$ years). While ``follow-up'' monitoring observations to supplement the LSST lightcurves may be feasible, at least for the brightest systems, here we conservatively focus on what the survey data alone can provide.

How well will we need to measure time delays in a future cosmography program? The gravitational lens ``time delay distance'' \citep[e.g.][]{Suy++2012} is primarily sensitive to the Hubble constant, $H_0$. While we expect the LSST lens sample to also provide interesting constraints on other cosmological parameters, notably the curvature density and the dark energy equation of state, as a first approximation we can quantify cosmographic accuracy via $H_0$. Certainly, an accurate measurement of $H_0$ would be a highly valuable addition to a joint analysis, provided it was precise to around 0.2\% \citep[and hence competitive with Stage IV BAO experiments, for example][]{Weinberg,SuyuEtal2012}. To reach this precision will require a sample of some 1000 lenses, with an average precision per lens of around 6\%. \citet{Suy++2013a} find the uncertainty in $H_0$ due to the lens model to be approximately 5--6\%, resulting from two approximately equal contributions of $\sim$4\% each: one from the main lens mass distribution and one from the weak lensing effects of mass along the line of sight to the lens. In a large sample we expect the environment uncertainty to be somewhat lower, as we sample lines of sight that are less over-dense than the systems studied so far \citep{Gre++2013,Col++2013}, so we might take the overall lens model uncertainty to be around 5\% per lens. In turn, this implies that we need to be able to measure individual time delays to around 3\% precision or better, on average, in order to stay under 6\% when combined in quadrature with the 5\% mass model uncertainty; this budget is summarized below. This is the challenge we face. The goal of this work is to enable an estimate of the feasible time delay measurement accuracy via a ``Time Delay Challenge'' (TDC) to the community.
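In summary, the approximate error budget motivating this target can be written as
\begin{equation}
  \sigma_{\rm lens} \simeq \sqrt{\sigma_{\rm model}^2 + \sigma_{\Delta t}^2}
                    \simeq \sqrt{(5\%)^2 + (3\%)^2} \approx 5.8\%,
  \qquad
  \sigma_{H_0} \sim \frac{\sigma_{\rm lens}}{\sqrt{N_{\rm lens}}}
               \approx \frac{6\%}{\sqrt{1000}} \approx 0.2\%,
\end{equation}
where $\sigma_{\rm model}$ and $\sigma_{\Delta t}$ are the per-lens lens model and time delay contributions discussed above, $N_{\rm lens}$ is the number of lenses in the sample, and the second relation assumes, optimistically, that the per-lens uncertainties are uncorrelated and average down as $1/\sqrt{N_{\rm lens}}$ (the notation is introduced here purely for convenience).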
Independent, blind analysis of plausibly realistic LSST-like lightcurves will allow the accuracy of current time series analysis algorithms to be assessed, and also allow us to make simple cosmographic forecasts for the anticipated LSST dataset. This work can be seen as a first step towards a full understanding of all systematic uncertainties present in the LSST lens dataset, but it could also provide valuable insight into the survey strategy needs of a Stage IV time delay lens cosmography program.