The time delays themselves have been proposed as tools to study massive substructures within lens galaxies \citep{KeetonAndMoustakas2009}, and for measuring cosmological parameters, primarily the Hubble constant, $H_0$ \citep[see, e.g.,][for a recent example]{SuyuEtal2013}, a method first proposed by \citet{Refsdal1964}. In the future, we aspire to measure further cosmological parameters, such as the properties of the dark energy that is accelerating the cosmic expansion, by combining large samples of measured time delay distances \citep[e.g.,][]{Linder2012}. It is clearly of great interest to develop to maturity the power of time delay lens analysis for probing the dark universe.

New wide-area imaging surveys that repeatedly scan the sky to gather time-domain information on variable sources are coming online. This pursuit will reach a new height when the \emph{Large Synoptic Survey Telescope} (LSST) enables the first long-baseline, multi-epoch observational campaign on $\sim$1000 lensed quasars \citep{LSSTSciBook}. However, using the measured LSST lightcurves to extract time delays for accurate cosmology will require a detailed understanding of how, and how well, time delays can be reconstructed from data with real-world properties of noise, gaps, and additional systematic variations. For example, to what accuracy can time delays between the multiple image intensity patterns be measured from individual double- or quadruply-imaged systems for which the sampling rate and campaign length are given by LSST? In order for time delay errors to be small compared to errors from the gravitational potential, we will need the precision of the time delays for an individual system to be better than 3\%, and those estimates will need to be robust to systematic error.

Simple techniques such as the ``dispersion'' method \citep{PeltEtal1994,PeltEtal1996} or spline interpolation through the sparsely sampled data \citep[e.g.,][]{TewesEtal2013a} yield time delays which {\it may} be insufficiently accurate for a Stage IV dark energy experiment. More complex algorithms, such as Gaussian Process modeling \citep[e.g.,][]{TewesEtal2013a,Hojjati+Linder2013}, may hold more promise. None of these methods have been tested on large-scale data sets. It is at present unclear whether the baseline ``universal cadence'' LSST sampling frequency, of $\sim10$ days in a given filter and $\sim 4$ days on average across all filters \citep{LSSTSciBook,LSSTpaper}, will enable sufficiently accurate time delay measurements, despite the long campaign length ($\sim10$ years). While ``follow up'' monitoring observations to supplement the LSST lightcurves may be feasible, at least for the brightest systems, here we conservatively focus on what the survey data alone can provide. Indeed, to maximize the capabilities of panoramic time-domain surveys such as LSST to probe the universe through strong lensing time delays, we must understand the interaction between the time delay estimation algorithms and the data properties.
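To make this interaction concrete, the sketch below implements a deliberately naive delay estimator on simulated, irregularly sampled light curves. It is a minimal illustration written for this discussion, not any of the published methods cited above; the simulated delay, cadence, noise level, and all variable names are assumptions chosen only to show the shape of the problem.

\begin{verbatim}
# A deliberately minimal, illustrative delay estimator (not the dispersion,
# spline, or Gaussian Process methods cited in the text): shift one
# irregularly sampled light curve over a grid of trial delays and minimize
# the mismatch against a linear interpolation of the other.
import numpy as np

rng = np.random.default_rng(42)

# Simulate a smooth intrinsic variation and two delayed, noisy, sparsely
# sampled copies. The 30-day delay, ~10-day mean sampling, and 2% noise
# are assumptions chosen for illustration only.
true_delay = 30.0                                   # days
t_grid = np.arange(0.0, 1500.0, 1.0)                # dense time grid, days
intrinsic = np.cumsum(rng.normal(0.0, 0.05, t_grid.size))  # random walk

def observe(offset, n_epochs=150, noise=0.02):
    """Sample the intrinsic curve, shifted by `offset` days, at irregular
    epochs with Gaussian photometric noise."""
    epochs = np.sort(rng.uniform(50.0, 1450.0, n_epochs))
    flux = np.interp(epochs - offset, t_grid, intrinsic)
    return epochs, flux + rng.normal(0.0, noise, n_epochs)

tA, fA = observe(0.0)          # image A
tB, fB = observe(true_delay)   # image B lags image A by true_delay

def mismatch(delay):
    """RMS difference between curve B and curve A evaluated (by linear
    interpolation) at the delay-shifted epochs of B; np.std removes the
    mean, absorbing any constant offset between the two images."""
    fA_shifted = np.interp(tB - delay, tA, fA)
    return np.std(fB - fA_shifted)

trial_delays = np.arange(-100.0, 100.0, 0.5)
scores = np.array([mismatch(d) for d in trial_delays])
print("estimated delay: %.1f days (true: %.1f days)"
      % (trial_delays[np.argmin(scores)], true_delay))
\end{verbatim}

Even at this level of simplification, re-running such a sketch with sparser sampling, larger gaps, or shorter campaigns illustrates how the minimum of the mismatch curve broadens and the recovered delay degrades; quantifying exactly this interplay for realistic algorithms and LSST-like data is what the challenge is designed to do.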
While optimizing the accuracy of LSST time delays is our long-term goal, improving the present-day algorithms will benefit current and planned lens monitoring projects too. We aim to explore the impact of cadences and campaign lengths spanning the range between today's monitoring campaigns and that expected from a baseline LSST survey, and simultaneously to provide input both to current projects looking to expand their sample sizes (and hence monitor more cheaply) and to the LSST project, whose exact survey strategy is not yet fixed.

The goal of this work, then, is to enable realistic estimates of the feasible time delay measurement accuracy. We will achieve this via a ``Time Delay Challenge'' (TDC) to the community. Independent, blind analysis of plausibly realistic LSST-like lightcurves will allow the accuracy of current time series analysis algorithms to be assessed, and so enable simple cosmographic forecasts for the anticipated LSST dataset. This work can be seen as a first step towards a full understanding of all systematic uncertainties present in the LSST strong lens dataset, but it will also provide valuable insight into the survey strategy needs of both Stage III and Stage IV time delay lens cosmography programs. Blind analysis, where the true value of the quantity being reconstructed is not known by the researchers, is a key tool for robustly testing the analysis procedure without biasing the results by continuing to look for errors until the right answer is reached, and then stopping.

This paper is organized as follows. In Section~\ref{sec:light_curves} we describe the simulated data that we have generated for the challenge, including some of the broad details of the observational and physical effects that may make extracting accurate time delays difficult, without giving away information that will not be observationally known during or after the LSST survey. Then, in Section~\ref{sec:structure} we describe the structure of the challenge, how interested groups can access the mock light curves, and a minimal set of approximate cosmographic accuracy criteria that we will use to assess their performance.
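Schematically, and purely for illustration (the criteria we actually adopt are defined in Section~\ref{sec:structure}), such criteria can be built from the differences between the submitted delays $\tilde{\Delta t}_i$, their quoted uncertainties $\sigma_i$, and the true delays $\Delta t_i$ of the $N$ measured systems, for example
\[
A = \frac{1}{N}\sum_{i=1}^{N} \frac{\tilde{\Delta t}_i - \Delta t_i}{\Delta t_i}\,, \qquad
P = \frac{1}{N}\sum_{i=1}^{N} \frac{\sigma_i}{\Delta t_i}\,, \qquad
\chi^2 = \frac{1}{N}\sum_{i=1}^{N} \left(\frac{\tilde{\Delta t}_i - \Delta t_i}{\sigma_i}\right)^2,
\]
which would quantify, respectively, the average fractional bias of the measurements, their average claimed precision, and the reliability of the quoted uncertainties.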