When a quasar is lensed into multiple images by a foreground galaxy, the observables for the lensed images are their positions, magnifications, and time delays. These observables can be used to measure directly both the structure of the lens galaxy itself (on scales $\geq M_{\odot}$) and cosmological parameters. In particular, while it has recently been proposed to use time delays to study massive substructures within lens galaxies \citep{KeetonAndMoustakas2009}, they have \emph{already} proven to be a powerful tool for measuring cosmological parameters, most notably the Hubble constant, $H_0$ \citep[see, e.g.,][for a recent example]{SuyuEtal2013}, using the method first proposed by \citet{Refsdal1964}.

The history of time-delay measurements in lens systems can be broadly split into three phases. In the first, the majority of the efforts were aimed at the first known lens system, Q0957+561 \citep{WalshEtal1979}. This system presented a particularly difficult case for time-delay measurements because the variability was smooth and relatively modest in amplitude, and because the time delay was long. The long delay meant that the annual season gaps, during which the source could not be observed at optical wavelengths, complicated the analysis far more than they would have for a system with a delay of significantly less than one year. The value of the time delay remained controversial, with adherents of the ``long'' and ``short'' delays \citep[e.g.,][]{PressEtal1992a,PressEtal1992b,PeltEtal1996} in disagreement until a sharp event in the light curves resolved the issue \citep{Kundic1995,Kundic1997}.

The second phase began in the mid-1990s, by which time tens of lens systems were known and small-scale but dedicated monitoring programs were being conducted. The larger sample included a number of lenses with time delays more conducive to focused monitoring, i.e., on the order of 10--150~days. Furthermore, advances in image processing techniques, notably the image deconvolution method developed by \citet{MagainEtal1998}, allowed optical monitoring of systems in which the image separation was small compared to the seeing. These monitoring programs, conducted at both optical and radio wavelengths, produced robust time-delay measurements \citep[e.g.,][]{Biggs1997,Lovell1997,Fassnacht1998,Fassnacht2002,Burud2002a,Burud2002b}, even with fairly simple analysis methods such as cross-correlation, maximum likelihood, or the ``dispersion'' method introduced by \citet{Pelt1994,Pelt1996}.

The third and current phase, which began roughly in the mid-2000s, has involved large and systematic monitoring programs that take advantage of the increasing amount of time available on 1--2~m class telescopes. Examples include the SMARTS program \citep[e.g.,][]{Kochanek2006}, the Liverpool Telescope robotic monitoring program \citep[e.g.,][]{Goicoechea2008}, and the COSMOGRAIL program \citep[e.g.,][]{EigenbrodEtal2005}. These programs have shown that it is possible to take an industrial-scale approach to lens monitoring and produce good time delays \citep[e.g.,][]{TewesEtal2013,EulaersEtal2013,RathnaKumarEtal2013}. The next phase, which has already begun, will involve lens monitoring by new large-scale surveys with time-domain information, such as Pan-STARRS and LSST.
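As a concrete illustration of how such ``simple'' estimators work, the following Python sketch implements a toy dispersion-style statistic in the spirit of \citet{Pelt1994,Pelt1996}: shift one light curve by a trial delay, merge the two curves, and find the delay that minimizes the scatter between time-adjacent points drawn from different images. The function name, the 5-day pairing window, and the mean-subtraction step are our own illustrative choices, not the published algorithm.

\begin{verbatim}
import numpy as np

def dispersion_delay(t_a, mag_a, t_b, mag_b, trial_delays,
                     max_pair_sep=5.0):
    """Toy dispersion-style delay estimator (illustrative sketch only)."""
    # Remove each image's mean magnitude; a real analysis would instead
    # fit the magnification offset and slow microlensing trends.
    mag_a = mag_a - np.mean(mag_a)
    mag_b = mag_b - np.mean(mag_b)
    dispersions = []
    for tau in trial_delays:
        # Shift image B back by the trial delay and merge the two curves.
        t = np.concatenate([t_a, t_b - tau])
        m = np.concatenate([mag_a, mag_b])
        src = np.concatenate([np.zeros_like(t_a), np.ones_like(t_b)])
        order = np.argsort(t)
        t, m, src = t[order], m[order], src[order]
        # Keep only adjacent pairs that mix the two images and are close
        # in time; at the true delay their differences are pure noise.
        mixed = (np.diff(src) != 0) & (np.diff(t) < max_pair_sep)
        dispersions.append(np.mean(np.diff(m)[mixed] ** 2))
    dispersions = np.asarray(dispersions)
    return trial_delays[np.argmin(dispersions)], dispersions
\end{verbatim}

The full dispersion curve is returned alongside the best-fit delay so that, as in the published analyses, the sharpness of the minimum can be inspected rather than trusting the point estimate alone.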
The accuracy of $H_0$ derived from the data for a given lens system depends both on the mass model for that system and on the precision of the measurements of the lensing observables. Typically, the positions and fluxes (and occasionally shapes, if the source is resolved) of the images can be obtained to sub-percent accuracy [REFS], but time-delay accuracies are usually on the order of days, or a few percent, for typical systems. This is compounded by the fact that measuring time delays requires continuous monitoring over months to years and, as one might intuitively expect, the precision of the measured delay does not greatly exceed the sampling interval (i.e., the time between epochs). [{\bf CDF: I don't think this is necessarily true, even if it is intuitive. For the B1608 VLA analysis, the mean spacing between epochs was $\sim$3 days, but the 68\% uncertainties were on the order of $\pm$1--1.5 days. This may be because the light curves were {\em not} evenly sampled, or it may be due to the fact that 2$\sigma$ is around the epoch spacing.}]
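To make the sampling-rate intuition (and the caveat in the bracketed note) concrete, one can run a small Monte Carlo with the toy estimator sketched above: simulate a smooth light curve, sample it irregularly at a mean spacing of $\sim$3 days, shift one copy by a known delay, add noise, and examine the scatter of the recovered delays. All parameter values below (cadence, noise level, variability timescales, the delay itself) are illustrative assumptions, not values from any real monitoring campaign.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
true_delay, cadence, n_epochs = 31.0, 3.0, 120   # illustrative values

def flux(t):
    # Smooth intrinsic variability: two long-period sinusoids.
    return np.sin(2 * np.pi * t / 200.0) + 0.5 * np.sin(2 * np.pi * t / 57.0)

recovered = []
for _ in range(200):
    # Irregular sampling with a ~3-day mean spacing, as in real campaigns.
    t_a = np.cumsum(rng.uniform(0.5, 1.5, n_epochs) * cadence)
    t_b = np.cumsum(rng.uniform(0.5, 1.5, n_epochs) * cadence)
    mag_a = flux(t_a) + rng.normal(0.0, 0.05, n_epochs)
    mag_b = flux(t_b - true_delay) + rng.normal(0.0, 0.05, n_epochs)
    trial_delays = np.arange(20.0, 42.0, 0.25)   # grid finer than cadence
    best, _disp = dispersion_delay(t_a, mag_a, t_b, mag_b, trial_delays)
    recovered.append(best)

print("delay scatter (days):", np.std(recovered))
\end{verbatim}

Whether the recovered scatter falls below the epoch spacing can then be checked directly for a given sampling pattern, which is precisely the question raised in the note above about the B1608 analysis.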