cdfassnacht edited introduction.tex  almost 11 years ago

Commit id: 0ba7991d0d9ca041f4d84df9e65f6ddb2abd9dd8

When a quasar is lensed into multiple images by a foreground galaxy, the observables of the lens images are their positions, magnifications, and time delays. These observables can be used to measure directly both the structure of the lens galaxy itself (on scales $\geq M_{\odot}$) and cosmological parameters. In particular, while it has recently been proposed to use time delays to study massive substructures within lens galaxies \citep{K+M2009}, they have \emph{already} proven to be a powerful tool for measuring cosmological parameters, most notably the Hubble constant, $H_0$ \citep[see, e.g.,][for a recent high-precision measurement]{Suyu++2013}, using the method first proposed by \citet{Refsdal1964}.

The history of time-delay measurements in lens systems can be broadly split into three phases. In the first, the majority of the efforts were aimed at the first known lens system, Q0957+561 \citep{Walsh++1979}. This system presented a particularly difficult situation for time-delay measurements because the variability was smooth and relatively modest in amplitude, and because the time delay was long. The long delay meant that the annual season gaps, during which the source could not be observed at optical wavelengths, complicated the analysis much more than they would have for systems with time delays of significantly less than one year. The value of the time delay remained controversial, with adherents of the ``long'' and ``short'' delays \citep[e.g.,][]{Press++19??,Pelt++19??} in disagreement until a sharp event in the light curves resolved the issue \citep{Kundic++19??,Kundic++1997}.

The second phase of time-delay measurements began in the mid-1990s, by which time tens of lens systems were known and small-scale but dedicated lens monitoring programs were being conducted.
With the larger number of systems, there were several lenses for which the time delays were more conducive to a focused monitoring program, i.e., systems with delays on the order of 10--150~days. The monitoring programs, conducted at both optical and radio wavelengths, produced much more robust time-delay measurements, even with fairly simple analysis methods such as cross-correlation, maximum likelihood, or the ``dispersion'' method introduced by \citet{Pelt++??,Pelt++??}.

The accuracy of $H_0$ derived from the data for a given lens system depends both on the mass model for that system and on the precision with which the lensing observables are measured. Typically, the positions and fluxes (and occasionally shapes, if the source is resolved) of the images can be obtained to sub-percent accuracy [REFS], but time-delay accuracies are usually on the order of days, or a few percent, for typical systems. This is compounded by the fact that measuring time delays requires continuous monitoring over months to years and that, as one might expect intuitively, the measured time-delay precision does not significantly exceed the sampling rate (i.e., the time between epochs). [{\bf CDF: I don't think this is necessarily true, even if it is intuitive. For the B1608 VLA analysis, the mean spacing between epochs was $\sim$3 days, but the 68\% uncertainties were on the order of $\pm$1--1.5 days. This may be because the light curves were {\em not} evenly sampled, or it may be due to the fact that 2$\sigma$ is around the epoch spacing.}]
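As a concrete illustration of the simplest of these techniques, a cross-correlation delay estimate between two light curves can be sketched as below. This is a minimal sketch assuming uniformly sampled, noise-free curves and searching only integer-sample trial shifts; the function name and synthetic light curves are illustrative inventions, not taken from any of the cited analyses, which deal with irregular sampling, noise, and microlensing.

```python
import numpy as np

def ccf_delay(t, a, b, max_shift):
    """Return the trial delay (in the units of t) that maximizes the
    Pearson correlation between light curves a and b, where b is
    assumed to be a delayed copy of a.  Assumes uniform sampling in t
    and searches integer-sample shifts only."""
    dt = t[1] - t[0]
    best_r, best_lag = -np.inf, 0.0
    for s in range(-max_shift, max_shift + 1):
        # Pair a[i] with b[i + s] over the overlap: positive s means b lags a.
        if s >= 0:
            x, y = a[:len(a) - s], b[s:]
        else:
            x, y = a[-s:], b[:len(b) + s]
        r = np.corrcoef(x, y)[0, 1]
        if r > best_r:
            best_r, best_lag = r, s * dt
    return best_lag

# Synthetic light curves: b is a delayed by 7 epochs (hypothetical data).
t = np.arange(200.0)  # daily sampling, arbitrary units
signal = lambda x: np.sin(0.1 * x) + 0.3 * np.sin(0.033 * x)
a, b = signal(t), signal(t - 7.0)
print(ccf_delay(t, a, b, 20))  # recovers a delay of 7.0
```

Real monitoring data are unevenly sampled, which is precisely why methods such as the dispersion technique, designed for irregular time series, were developed; this sketch only conveys the underlying idea of sliding one curve against the other and scoring the overlap.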