\section{Introduction}
Gravity bends light. The most dramatic manifestation of this occurs with strong lensing, when light rays from a single source take multiple paths to reach the observer, causing the appearance of multiple images of the same source. These images will also be magnified (in size and intensity) due to the bending. When the source is time varying -- for example a quasar, an extremely luminous active galactic nucleus at cosmological distance, lensed into multiple images by a foreground galaxy -- the images vary in time, but with delays between the patterns of variation due to the differing path lengths taken by the light and the different gravitational potentials it passes through. The observables from the lens images are their positions, magnifications, and the time delays between them. These observables can be used to directly measure both the structure of the lens galaxy itself (on scales $\geq M_{\odot}$) and cosmological parameters: the time delays involve distances between source, lens, and observer, and these distances depend on the cosmic expansion rate, which in turn depends on the energy densities of the various components of the universe, phrased collectively as the cosmological parameters. Both these probes of the universe are in use. It has recently been proposed to use time delays to study massive substructures within lens galaxies \citep{KeetonAndMoustakas2009}, and they have \emph{already} proven to be a powerful tool for measuring cosmological parameters, most notably the Hubble constant, $H_0$ \citep[see, e.g.,][for a recent example]{SuyuEtal2013}, using the method first proposed by \citet{Refsdal1964}. In the future, one hopes to use time delays to measure other cosmological parameters, such as the properties of the dark energy that accelerates the cosmic expansion.

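In standard lensing notation, the dependence of the delays on both the lens potential and the distances can be made explicit: the extra arrival time of an image at angular position $\theta$, for a source at position $\beta$, is

```latex
\begin{equation}
\Delta t(\theta) = \frac{D_{\Delta t}}{c}
\left[\frac{(\theta-\beta)^2}{2} - \psi(\theta)\right],
\qquad
D_{\Delta t} \equiv (1+z_{\rm d})\,\frac{D_{\rm d}\,D_{\rm s}}{D_{\rm ds}},
\end{equation}
```

where $\psi$ is the projected gravitational potential of the lens, $z_{\rm d}$ is the deflector redshift, and $D_{\rm d}$, $D_{\rm s}$, and $D_{\rm ds}$ are the angular diameter distances to the deflector, to the source, and between the two; the measured delay between two images is the difference of $\Delta t$ evaluated at their positions.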
The history of the measurement of time delays in lens systems can be broadly split into three phases. In the first, the majority of the efforts were aimed at the first known lens system, Q0957+561 \citep{WalshEtal1979}. This system presented a particularly difficult situation for time-delay measurements, because the variability was smooth and relatively modest in amplitude, and because the time delay was long. This latter point meant that the annual season gaps, when the source could not be observed at optical wavelengths, complicated the analysis much more than they would have for systems with time delays of significantly less than one year. The value of the time delay remained controversial, with adherents of the ``long'' and ``short'' delays \citep[e.g.,][]{PressEtal1992a,PressEtal1992b,PeltEtal1996} in disagreement until a sharp event in the light curves resolved the issue \citep{KundicEtal1995,KundicEtal1997}. The second phase of time delay measurements began in the mid-1990s, by which time tens of lens systems were known, and small-scale but dedicated lens monitoring programs were conducted. With the larger number of systems, there were a number of lenses for which the time delays were more conducive to a focused monitoring program, i.e., systems with time delays on the order of 10--150~days. Furthermore, advances in image processing techniques, notably the image deconvolution method developed by \citet{MagainEtal1998}, allowed optical monitoring of systems in which the image separation was small compared to the seeing. The monitoring programs, conducted at both optical and radio wavelengths, produced robust time delay measurements \citep[e.g.,][]{LovellEtal1998,BiggsEtal1999,FassnachtEtal1999,FassnachtEtal2002,BurudEtal2002a,BurudEtal2002b}, even using fairly simple analysis methods such as cross-correlation, maximum likelihood, or the ``dispersion'' method introduced by \citet{PeltEtal1994,PeltEtal1996}. The third and current phase, which began roughly in the mid-2000s, has involved large and systematic monitoring programs that have taken advantage of the increasing amount of time available on 1--2~m class telescopes. Examples include the SMARTS program \citep[e.g.,][]{KochanekEtal2006}, the Liverpool Telescope robotic monitoring program \citep[e.g.,][]{GoicoecheaEtal2008}, and the COSMOGRAIL program \citep[e.g.,][]{EigenbrodEtal2005}. These programs have shown that it is possible to take an industrial-scale approach to lens monitoring and produce good time delays \citep[e.g.,][]{TewesEtal2013,EulaersEtal2013,RathnaKumarEtal2013}. The next phase, which has already begun, will be lens monitoring from new large-scale surveys that include time-domain information, such as PanSTARRS and LSST.

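To make the simplest of these analysis methods concrete, the following toy sketch (not any published pipeline; the light curves, noise level, and delay are invented for illustration) estimates a delay by maximizing the cross-correlation of two regularly sampled light curves:

```python
import numpy as np

def crosscorr_delay(t, a, b, max_lag=50):
    """Estimate how much light curve b lags light curve a, assuming both
    are sampled on the same regular time grid t, by maximizing the
    normalized cross-correlation over integer-step lags."""
    dt = t[1] - t[0]                       # regular sampling interval
    a = (a - a.mean()) / a.std()           # normalize both curves
    b = (b - b.mean()) / b.std()
    lags = np.arange(-max_lag, max_lag + 1)
    cc = []
    for k in lags:
        if k >= 0:                         # compare a[i] with b[i + k]
            cc.append(np.mean(a[:len(a) - k] * b[k:]))
        else:
            cc.append(np.mean(a[-k:] * b[:len(b) + k]))
    return lags[int(np.argmax(cc))] * dt   # best-fit delay in days

# Toy light curves: smooth variability, image B delayed by 12 days.
t = np.arange(0.0, 400.0, 1.0)
signal = np.sin(2 * np.pi * t / 90.0) + 0.3 * np.sin(2 * np.pi * t / 33.0)
rng = np.random.default_rng(1)
a = signal + 0.05 * rng.standard_normal(t.size)
b = np.interp(t - 12.0, t, signal) + 0.05 * rng.standard_normal(t.size)
print(crosscorr_delay(t, a, b))            # close to the true 12-day delay
```

Real monitoring data are irregularly sampled and interrupted by season gaps, which is part of why more elaborate estimators were developed.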
The accuracy of $H_0$ derived from the data for a given lens system depends on both the mass model for the system and the precision with which the lensing observables are measured. Typically, the positions and fluxes (and occasionally the shapes, if the source is resolved) of the images can be obtained to sub-percent accuracy [REFS], but time delay accuracies are usually on the order of days, or a few percent, for typical systems. This is compounded by the fact that measuring time delays requires continuous monitoring over months to years, and, as one might expect intuitively, the measured time delay precision does not significantly exceed the sampling rate (or time between epochs). [{\bf CDF: I don't think this is necessarily true, even if it is intuitive. For the B1608 VLA analysis, the mean spacing between epochs was $\sim$3 days, but the 68\% uncertainties were on the order of $\pm$1-1.5 day. This may be because the light curves were {\em not} evenly sampled, or it may be due to the fact that 2$\sigma$ is around the epoch spacing.}]

New wide-area imaging surveys that repeatedly scan the sky to gather time-domain information on variable sources are coming online. This effort will reach an apex with the upcoming \emph{Large Synoptic Survey Telescope} (LSST), which will provide the first long-baseline, multi-epoch observational campaign on $\sim$1000 lensed quasars [REFS]. From these data, time delays can in principle be extracted. However, if we are to use these time delays for precision cosmology (and in particular for measuring $H_0$ precisely), we must accurately understand how, and how well, time delays can be reconstructed from data with real-world properties of noise, gaps, and additional systematic variations. For example, to what accuracy can time delays between the multiple image intensity patterns be measured from individual double- or quadruply-imaged systems for which the sampling rate and campaign length are given by LSST?

In order for time delay errors to be small compared to errors from the gravitational potential, we need robust estimation of time delays in individual systems to better than 3\%. Simple techniques such as the ``dispersion'' method \citep{PeltEtal1994,PeltEtal1996} or spline interpolation through the sparsely sampled data [REFS?] yield time delays that may be insufficiently accurate for a Stage IV dark energy experiment. More complex algorithms, such as Gaussian Process modeling [REFS], hold more promise, but have not yet been tested on large-scale data sets. Somewhat independently of the measurement algorithm, it is at present unclear whether the planned LSST sampling frequency ($\sim10$ days in a given filter, $\sim 4$ days on average across all filters; REF science book/paper) will enable sufficiently accurate time delay measurements, despite the long campaign length ($\sim10$ years). While ``follow-up'' monitoring observations to supplement the LSST lightcurves may be feasible, at least for the brightest systems, here we conservatively focus on what the survey data alone can provide.

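As an illustration of the kind of estimator that must cope with irregular sampling, here is a minimal sketch of the dispersion idea (shift one curve by a trial delay, merge the two curves, and minimize the scatter between neighboring points). The light curves, noise level, and delay are invented for this example, and the statistic below is a deliberately simplified version of the published method:

```python
import numpy as np

def dispersion(ta, a, tb, b, tau):
    """Simplified Pelt-style dispersion statistic: shift curve b back by
    the trial delay tau, merge the two curves into one time series, and
    sum the squared differences between neighboring points.  A correct
    trial delay yields a smooth merged curve and a small statistic."""
    times = np.concatenate([ta, tb - tau])
    mags = np.concatenate([a, b])
    m = mags[np.argsort(times)]            # merged curve in time order
    return np.sum((m[1:] - m[:-1]) ** 2)

def dispersion_delay(ta, a, tb, b, taus):
    """Grid search for the trial delay that minimizes the dispersion."""
    d = [dispersion(ta, a, tb, b, tau) for tau in taus]
    return taus[int(np.argmin(d))]

# Toy light curves with irregular sampling and a true 15-day delay.
rng = np.random.default_rng(0)
ta = np.sort(rng.uniform(0.0, 300.0, 120))   # epochs for image A
tb = np.sort(rng.uniform(0.0, 300.0, 120))   # different epochs for image B
def f(x):
    return np.sin(2 * np.pi * x / 80.0) + 0.4 * np.sin(2 * np.pi * x / 27.0)
a = f(ta) + 0.03 * rng.standard_normal(ta.size)
b = f(tb - 15.0) + 0.03 * rng.standard_normal(tb.size)
taus = np.arange(-40.0, 40.0, 0.5)
print(dispersion_delay(ta, a, tb, b, taus))  # close to the true 15-day delay
```

Unlike cross-correlation on a regular grid, nothing here assumes the two images share observing epochs, which is the regime relevant to survey light curves.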
How well will we need to measure time delays in a future cosmography program? The gravitational lens ``time delay distance'' \citep[e.g.][]{Suy++2012} is primarily sensitive to the Hubble constant, $H_0$. While we expect the LSST lens sample to also provide interesting constraints on other cosmological parameters, notably the curvature density and the dark energy equation of state, as a first approximation we can quantify cosmographic accuracy via $H_0$. Certainly, an accurate measurement of $H_0$ would be a highly valuable addition to a joint analysis, provided it was precise to around 0.2\% \citep[and hence competitive with Stage IV BAO experiments; see, for example,][]{Weinberg,SuyuEtal2012}. To reach this precision will require a sample of some 1000 lenses, with an approximate average precision per lens of around 6\%. \citet{Suy++2013a} find the uncertainty in $H_0$ due to the lens model to be approximately 5--6\%, resulting from two approximately equal 4\% contributions: one from the main lens mass distribution, and one from the weak lensing effects of mass along the line of sight to the lens. In a large sample, we expect the environment uncertainty to be somewhat lower, as we will sample lines of sight that are less over-dense than those of the systems studied so far \citep{Gre++2013,Col++2013}, so we might take the overall lens model uncertainty to be around 5\% per lens. In turn, this implies that we need to be able to measure individual time delays to around 3\% precision or better, on average, in order to stay under 6\% when combined in quadrature with the 5\% mass model uncertainty. This is the challenge we face.

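Explicitly, the per-lens and ensemble error budgets work out as follows:

```latex
\begin{equation}
\sqrt{(3\%)^2 + (5\%)^2} \approx 5.8\% < 6\%,
\qquad
\frac{6\%}{\sqrt{1000}} \approx 0.19\%,
\end{equation}
```

so a sample of $\sim$1000 lenses, each yielding $H_0$ to $\sim$6\%, reaches the $\sim$0.2\% aggregate precision quoted above, assuming independent, purely statistical per-lens errors.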
The goal of this work is to enable an estimate of the feasible time delay measurement accuracy via a ``Time Delay Challenge'' (TDC) to the community. Independent, blind analysis of plausibly realistic LSST-like lightcurves will allow the accuracy of current time series analysis algorithms to be assessed, and will also allow us to make simple cosmographic forecasts for the anticipated LSST dataset. Blind analysis, in which the true value of the quantity being reconstructed is not known by the researchers, is a key tool for robustly testing the analysis procedure without biasing the results by continuing to look for errors until the ``right'' answer is reached, and then stopping. This work can be seen as a first step towards a full understanding of all systematic uncertainties present in the LSST strong lens dataset, but it could also provide valuable insight into the survey strategy needs of a Stage IV time delay lens cosmography program.

This paper is organized as follows. In \S \ref{sec:light_curves} we describe the simulated data that we have generated for the challenge, including some of the broad details of observational and physical effects that may make extracting accurate time delays difficult, without giving away information that will not be observationally knowable during or after the LSST survey. Then, in \S \ref{sec:structure} we describe the structure of the challenge, how interested groups can access the mock light curves, and the approximate cosmographic accuracy criteria that we will use to assess their performance.