% introduction.tex, last edited by Phil Marshall
% Commit id: f0de0ae27736093404f11edca8e6f2201bb9b135
\section{Introduction}
When a quasar is lensed into multiple images by a foreground galaxy, the observables of the lens images are their positions, magnifications, and time delays. These observables can be used to directly measure both the structure of the lens galaxy itself (on scales $\geq M_{\odot}$) and cosmological parameters. In particular, while it has recently been proposed to use time delays to study massive substructures within lens galaxies [Keeton \& Moustakas 2009], [REF], they have \emph{already} proven to be a powerful tool for measuring $H_0$ [REFS].
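The cosmological sensitivity arises because the delay between two images scales with a ratio of angular diameter distances:
\begin{equation}
\Delta t_{ij} = \frac{D_{\Delta t}}{c}\,\Delta\phi_{ij}, \qquad D_{\Delta t} \equiv (1+z_{\rm d})\,\frac{D_{\rm d} D_{\rm s}}{D_{\rm ds}} \propto H_0^{-1},
\end{equation}
where $\Delta\phi_{ij}$ is the difference in the Fermat potential between the image positions, and $z_{\rm d}$, $D_{\rm d}$, $D_{\rm s}$, and $D_{\rm ds}$ are the lens redshift and the angular diameter distances to the lens, to the source, and from lens to source, respectively. A measured delay, combined with a mass model that predicts $\Delta\phi_{ij}$, thus yields $D_{\Delta t}$ and hence $H_0$.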
The accuracy of $H_0$ derived from the data for a given lens system depends on both the mass model for that system and the precision of the measured lensing observables. Typically, the positions and fluxes (and occasionally shapes, if the source is resolved) of the images can be obtained to sub-percent accuracy [REFS]; time delay uncertainties, however, are usually of order days, which corresponds to a few percent accuracy for typical systems. This is compounded by the fact that measuring time delays requires continuous monitoring over weeks to years, and, as one might expect intuitively, the measured time delay precision does not significantly exceed the sampling rate (the time between epochs).
With the upcoming \emph{Large Synoptic Survey Telescope} (LSST), we will have the first long-baseline, multi-epoch observational campaign covering $\sim$1000 lensed quasars [REFS], from which time delays can in principle be extracted. However, if we are to use these time delays for precision cosmology (and in particular for measuring $H_0$ precisely), two questions must be addressed.
First, to what accuracy can time delays be measured from individual double- or quadruply-imaged systems for which the sampling rate and campaign length are given by LSST? Simple techniques such as the ``dispersion technique'' of Pelt et al.\ [REFS] or spline interpolation through the sparsely sampled data [REFS] may yield time delays that are insufficiently accurate for a Stage IV dark energy experiment. More complex algorithms, such as Gaussian Process modeling [REFS], hold more promise, but have also not been tested at scale. Somewhat independently of the measurement algorithm, it is at present unclear whether the planned LSST sampling frequency ($\sim10$ days in a given filter, $\sim4$ days on average across all filters [REF: science book/paper]) will enable sufficiently accurate time delay measurements, despite the long campaign length ($\sim10$ years).
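To make the first of these simple techniques concrete, the sketch below implements a minimal dispersion-style estimator: one light curve is shifted by a trial delay, the two curves are merged, and the delay that minimizes the scatter between time-adjacent points of the merged curve is selected. This is an illustrative simplification, not the Pelt et al.\ estimator itself; the function names and the uniform grid search are our own choices.

```python
import numpy as np

def dispersion(t_a, m_a, t_b, m_b, delay):
    """Shift curve B back by `delay`, merge with curve A, and return
    the mean squared magnitude difference between time-adjacent points
    of the combined, time-ordered series (a simplified dispersion
    statistic: it is smallest when the two curves line up)."""
    t = np.concatenate([t_a, t_b - delay])
    m = np.concatenate([m_a, m_b])
    order = np.argsort(t)
    dm = np.diff(m[order])
    return np.mean(dm ** 2)

def estimate_delay(t_a, m_a, t_b, m_b, trial_delays):
    """Grid search: the estimated delay is the trial value that
    minimizes the dispersion of the merged light curve."""
    scores = [dispersion(t_a, m_a, t_b, m_b, tau) for tau in trial_delays]
    return trial_delays[int(np.argmin(scores))]
```

On noiseless, smoothly varying curves sampled every few days, a grid search of this kind recovers the input delay to roughly the grid spacing; real (noisy, microlensed, season-gapped) light curves are precisely where such simple estimators are expected to struggle.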
Second, do $\sim1000$ time delays plausibly measured using LSST-like data yield competitive constraints on $H_0$? While ``follow up'' monitoring observations to supplement the LSST lightcurves may be feasible, at least for the brightest systems, here we conservatively focus on what the survey data alone can provide.
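A rough statistical expectation sets the scale of this question: if each of $N$ lens systems yields an independent, unbiased estimate of $H_0$ with fractional uncertainty $\sigma_{\rm per}$, the ensemble constraint scales as
\begin{equation}
\frac{\sigma_{H_0}}{H_0} \approx \frac{\sigma_{\rm per}}{\sqrt{N}},
\end{equation}
so that a large sample of modestly measured delays can in principle compete with a few exquisitely measured ones, provided that systematic biases are controlled at a level below this statistical floor.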
Of course, the latter question cannot be answered without the former: our first task is to estimate likely time delay measurement accuracy. The goal of this work is to enable that via a ``Time Delay Challenge'' (TDC) to the community. Independent, blind analysis of plausibly realistic LSST-like lightcurves will allow the accuracy of current time series analysis algorithms to be assessed, and also allow us to make simple cosmographic forecasts for the anticipated LSST dataset. This work can be seen as a first step towards a full understanding of all systematic uncertainties present in the LSST lens dataset, but it could also provide valuable insight into the survey strategy needs of a Stage IV time delay lens cosmography program.
In \S \ref{sec:light_curves} we describe the simulated data that we have generated for the challenge, including some of the broad details of the observational and physical effects that may make extracting accurate time delays difficult, without giving away information that will not be observationally knowable during or after the LSST survey. In \S \ref{sec:structure} we describe the structure of the challenge, how interested groups can access the mock light curves, and the ensemble average time delay distance accuracy criterion that we will use to assess their performance.