\section{Light Curves and Simulated Data}
\label{sec:light_curves}

The intensity of a variable source as a function of time is referred to as its light curve. For lensed sources, the light curves of the individual images follow the intrinsic variability of the quasar, but each is shifted by a different time delay. Only the relative delays between the images are measurable, since the unlensed quasar itself cannot be observed. Of course, we do not actually measure a continuous light curve, but rather discrete values of the intensity at different epochs. This sampling of the light curves, the noise in the photometric measurements, and external effects that cause additional variations in the intensity all complicate the estimation of the time delays.

\subsection{Basics}
\label{sec:basics}

The history of the measurement of time delays in lens systems can be broadly split into three phases. In the first, the majority of the efforts were aimed at the first known lens system, Q0957+561 \citep{WalshEtal1979}. This system presented a particularly difficult situation for time-delay measurements, because the variability was smooth and relatively modest in amplitude, and because the time delay was long. This latter point meant that the annual season gaps when the source could not be observed at optical wavelengths complicated the analysis much more than they would have for systems with time delays of significantly less than one year. The value of the time delay remained controversial, with adherents of the “long” and “short” delays \citep[e.g.,][]{PressEtal1992a,PressEtal1992b,PeltEtal1996} in disagreement until a sharp event in the light curves resolved the issue \citep{KundicEtal1995,KundicEtal1997}. The second phase of time delay measurements began in the mid-1990s, by which time tens of lens systems were known, and small-scale but dedicated lens monitoring programs were conducted. With the larger number of systems, there were a number of lenses for which the time delays were more conducive to a focused monitoring program, i.e., systems with time delays on the order of 10–150 days. Furthermore, advances in image processing techniques, notably the image deconvolution method developed by \citet{MagainEtal1998}, allowed optical monitoring of systems in which the image separation was small compared to the seeing. The monitoring programs, conducted at both optical and radio wavelengths, produced robust time delay measurements \citep[e.g.,][]{LovellEtal1998,BiggsEtal1999,FassnachtEtal1999,FassnachtEtal2002,BurudEtal2002a,BurudEtal2002b}, even using fairly simple analysis methods such as cross-correlation, maximum likelihood, or the “dispersion” method introduced by \citet{PeltEtal1994,PeltEtal1996}. 
The third and current phase, which began roughly in the mid-2000s, has involved large and systematic monitoring programs that have taken advantage of the increasing amount of time available on 1–2 m class telescopes. Examples include the SMARTS program \citep[e.g.,][]{KochanekEtal2006}, the Liverpool Telescope robotic monitoring program \citep[e.g.,][]{GoicoecheaEtal2008}, and the COSMOGRAIL program \citep[e.g.,][]{EigenbrodEtal2005}. These programs have shown that it is possible to take an industrial-scale approach to lens monitoring and produce good time delays \citep[e.g.,][]{TewesEtal2013a,EulaersEtal2013,RathnaKumarEtal2013}. The next phase, which has already begun, will be lens monitoring from new large-scale surveys that include time-domain information such as the Dark Energy Survey, PanSTARRS, and LSST.

Measured time delays constrain the time-delay distance
\[D_{\Delta t} = (1 + z_l)\,\frac{d_l d_s}{d_{ls}},\]
where \(z_l\) is the lens redshift, \(d_l\) is the angular diameter distance between observer and lens, \(d_s\) that between observer and source, and \(d_{ls}\) that between lens and source. Note that, because of spacetime curvature, the lens--source distance is not simply the difference between the other two. The time-delay distance is inversely proportional to the Hubble constant \(H_0\), the current cosmic expansion rate that sets the scale of the universe, but the distances also depend on the matter and dark energy densities and on the dark energy equation of state.
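As a minimal illustrative sketch (not part of the challenge machinery), the time-delay distance can be computed in a flat \(\Lambda\)CDM cosmology with simple numerical integration. The parameter values and redshifts below are illustrative assumptions, and the conventional \((1+z_l)\) prefactor is included in the definition of \(D_{\Delta t}\):

```python
import math

# Sketch: time-delay distance D_dt = (1 + z_l) d_l d_s / d_ls in flat LambdaCDM.
# All parameter values (H0 = 70, Omega_m = 0.3) are illustrative assumptions.

C_KM_S = 299792.458  # speed of light [km/s]

def comoving_distance(z, h0=70.0, om=0.3, n=10000):
    """Line-of-sight comoving distance [Mpc] via trapezoidal integration."""
    if z == 0.0:
        return 0.0
    dz = z / n
    integral = 0.0
    for i in range(n + 1):
        zi = i * dz
        e = math.sqrt(om * (1.0 + zi) ** 3 + (1.0 - om))  # E(z) for flat universe
        w = 0.5 if i in (0, n) else 1.0                    # trapezoid end weights
        integral += w / e
    return (C_KM_S / h0) * integral * dz

def angular_diameter_distance(z1, z2=0.0, **kw):
    """D_A from z1 to z2 (flat universe); with one argument, from observer to z1."""
    if z2 == 0.0:
        z1, z2 = 0.0, z1
    return (comoving_distance(z2, **kw) - comoving_distance(z1, **kw)) / (1.0 + z2)

def time_delay_distance(z_l, z_s, **kw):
    """D_dt = (1 + z_l) d_l d_s / d_ls; scales as 1/H0 at fixed densities."""
    d_l = angular_diameter_distance(z_l, **kw)
    d_s = angular_diameter_distance(z_s, **kw)
    d_ls = angular_diameter_distance(z_l, z_s, **kw)
    return (1.0 + z_l) * d_l * d_s / d_ls

print(round(time_delay_distance(0.5, 2.0), 1))  # D_dt in Mpc
```

Halving \(H_0\) doubles every distance, so \(D_{\Delta t}\) doubles as well, which is the sense in which measured delays constrain \(H_0\).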

The accuracy of \(D_{\Delta t}\) derived from the data for a given lens system depends on both the mass model for that system and the precision with which the lensing observables are measured. Typically, positions and fluxes (and occasionally shapes, if the source is resolved) of the images can be obtained to sub-percent accuracy \citep[see e.g.,][]{COSMOGRAIL}, but time delay accuracies are usually on the order of days, or a few percent, for typical systems \citep[see e.g.,][]{TewesEtal2013b}. Measuring time delays requires continuous monitoring over months to years. However, wide-area surveys only return to a given patch of sky every few nights, sources are only visible from a given point on the Earth during certain months of the year, and bad weather can introduce additional gaps in the data.

\subsection{Simulating light curves}
\label{sec:simulate}

Simulating the LSST observation of a multiply-imaged quasar involves four conceptual steps:

\begin{enumerate}
\item The quasar’s intrinsic light curve in a given optical band is generated at the accretion disk of the black hole in an active galactic nucleus (AGN);
\item the foreground lens galaxy causes multiple imaging, producing two or four lensed light curves that are offset from the intrinsic light curve (and from each other) in both amplitude (due to magnification) and time;
\item time-dependent amplitude fluctuations due to microlensing by stars in the lens galaxy are superposed independently on each lensed light curve;
\item the delayed and microlensed light curves are sparsely, but simultaneously, sampled at the observational epochs, with measurement noise added.
\end{enumerate}

In the following sections we describe in some detail how each of these steps was simulated in generating the mock LSST light curve catalog for the challenge.
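As a toy illustration of the four conceptual steps (not the actual code used to build the challenge catalog), the pipeline can be sketched in Python. Here the intrinsic variability is modeled as a damped random walk, a commonly used stochastic model for quasar optical variability; a slower, independent random walk per image stands in for microlensing; and Gaussian photometric scatter plays the role of measurement noise. All parameter values, function names, and cadences are illustrative assumptions:

```python
import math
import random

def drw_light_curve(n_days, tau=200.0, sigma=0.1, mean_mag=20.0, seed=0):
    """Step 1: daily-sampled damped-random-walk magnitudes (exact 1-day update).

    tau is the damping timescale [days], sigma the asymptotic scatter [mag].
    """
    rng = random.Random(seed)
    decay = math.exp(-1.0 / tau)
    scatter = sigma * math.sqrt(1.0 - decay ** 2)
    mags = [mean_mag]
    for _ in range(n_days - 1):
        mags.append(mean_mag + decay * (mags[-1] - mean_mag) + rng.gauss(0.0, scatter))
    return mags

def lensed_images(intrinsic, delays, d_mags):
    """Step 2: shift each image by its delay [days] and offset it in magnitude."""
    images = []
    for dt, dm in zip(delays, d_mags):
        images.append([intrinsic[max(0, t - dt)] + dm for t in range(len(intrinsic))])
    return images

def add_microlensing(curve, amp=0.05, tau=1000.0, seed=1):
    """Step 3: overlay an independent slow random walk as a toy microlensing signal."""
    wiggle = drw_light_curve(len(curve), tau=tau, sigma=amp, mean_mag=0.0, seed=seed)
    return [m + w for m, w in zip(curve, wiggle)]

def observe(curve, epochs, mag_err=0.02, seed=2):
    """Step 4: sample the true curve at the survey epochs with photometric noise."""
    rng = random.Random(seed)
    return [(t, curve[t] + rng.gauss(0.0, mag_err)) for t in epochs]

# A two-image system: delays of 0 and 30 days, a 0.8 mag magnification offset.
intrinsic = drw_light_curve(3 * 365)
images = lensed_images(intrinsic, delays=[0, 30], d_mags=[0.0, 0.8])
images = [add_microlensing(img, seed=10 + i) for i, img in enumerate(images)]
epochs = range(0, 3 * 365, 5)  # one visit every 5 nights; no season gaps here
data = [observe(img, epochs, seed=20 + i) for i, img in enumerate(images)]
```

Recovering the 30-day offset from the two sparse, noisy series in \texttt{data} is then the estimation problem that the challenge poses; a realistic version would add season gaps, irregular cadence, and correlated microlensing from magnification maps.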