Strong Lens Time Delay Challenge: I. Experimental Design

    Abstract

    The time delays between point-like images in gravitational lens systems can be used to measure cosmological parameters as well as probe the dark matter (sub-)structure within the lens galaxy. The number of lenses with measured time delays is growing rapidly due to dedicated efforts. In the near future, the upcoming Large Synoptic Survey Telescope (LSST) will monitor \(\sim10^3\) lens systems consisting of a foreground elliptical galaxy producing multiple images of a background quasar. In an effort to assess the present capabilities of the community to accurately measure the time delays in strong gravitational lens systems, and to provide input to dedicated monitoring campaigns and future LSST cosmology feasibility studies, we pose a “Time Delay Challenge” (TDC). The challenge is organized as a set of “ladders,” each containing a group of simulated datasets to be analyzed blindly by participating independent analysis teams. Each rung on a ladder consists of a set of realistic mock observed lensed quasar light curves, with the rungs’ datasets increasing in complexity and realism to incorporate a variety of anticipated physical and experimental effects. The initial challenge described here has two ladders, TDC0 and TDC1. TDC0 has a small number of datasets, and is designed to be used as a practice set by the participating teams as they set up their analysis pipelines. The non-mandatory deadline for completion of TDC0 will be December 1, 2013. The teams that perform sufficiently well on TDC0 will then be able to participate in the much more demanding TDC1. TDC1 will consist of \(10^3\) light curves, a sample designed to provide the statistical power to make meaningful statements about the sub-percent accuracy that will be required to deliver competitive Dark Energy constraints in the LSST era. In this paper we describe the simulated datasets in general terms, lay out the structure of the challenge, and define a minimal set of metrics that will be used to quantify the goodness-of-fit, efficiency, precision, and accuracy of the algorithms. The results for TDC1 from the participating teams will be presented in a companion paper to be submitted after the closing of TDC1, with all TDC1 participants as co-authors.

    Introduction

    As light travels to us from a distant source, its path is deflected by the gravitational forces of intervening matter. The most dramatic manifestation of this effect occurs in strong lensing, when light rays from a single source can take several paths to reach the observer, causing the appearance of multiple images of the same source. These images are also magnified in size, and hence in total brightness (because surface brightness is conserved in gravitational lensing). When the source is time varying, the images are observed to vary with delays between them, due to the differing path lengths taken by the light and the different gravitational potentials that it passes through. A common example of such a lensed source is a quasar, an extremely luminous active galactic nucleus at cosmological distance. From observations of the image positions, magnifications, and the time delays between the multiple images, we can measure the mass structure of the lens galaxy itself (on scales \(\geq M_{\odot}\)) as well as a characteristic distance between the source, lens, and observer. This “time delay distance” encodes the cosmic expansion rate, which in turn depends on the energy density of the various components of the universe, phrased collectively as the cosmological parameters.
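
    To make the connection between the delays, the lens structure, and cosmology explicit, we recall the standard relation (not specific to this challenge): the delay between images \(i\) and \(j\) of a source at angular position \(\beta\) is

\[
\Delta t_{ij} = \frac{D_{\Delta t}}{c}\left[\phi(\theta_i,\beta)-\phi(\theta_j,\beta)\right],
\qquad
\phi(\theta,\beta)=\frac{(\theta-\beta)^2}{2}-\psi(\theta),
\]

    where \(\phi\) is the Fermat potential, \(\psi\) is the scaled lens potential, and the time delay distance

\[
D_{\Delta t} \equiv (1+z_{\rm d})\,\frac{D_{\rm d}D_{\rm s}}{D_{\rm ds}} \propto \frac{c}{H_0}
\]

    is built from the angular diameter distances to the deflector, to the source, and between the two; it is inversely proportional to the Hubble constant and depends more weakly on the other cosmological parameters.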

    The time delays themselves have been proposed as tools to study massive substructures within lens galaxies (Keeton et al., 2009), and to measure cosmological parameters, primarily the Hubble constant, \(H_0\) (see, e.g., Suyu et al., 2013, for a recent example), a method first proposed by Refsdal (1964). In the future, we aspire to measure further cosmological parameters (e.g., those describing dark energy) by combining large samples of measured time delay distances (Linder, 2012). It is clearly of great interest to develop time delay lens analysis into a mature probe of the dark universe.

    New wide area imaging surveys that repeatedly scan the sky to gather time-domain information on variable sources are coming online. Dedicated follow-up monitoring campaigns are obtaining tens of time delays (REF). This pursuit will reach a new height when the Large Synoptic Survey Telescope (LSST) enables the first long baseline multi-epoch observational campaign on \(\sim\)1000 lensed quasars (LSST Science Collaboration et al., 2009). However, using the measured LSST lightcurves to extract time delays for accurate cosmology will require a detailed understanding of how, and how well, time delays can be reconstructed from data with real world properties of noise, gaps, and additional systematic variations. For example, to what accuracy can time delays between the multiple image intensity patterns be measured from individual doubly- or quadruply-imaged systems for which the sampling rate and campaign length are given by LSST? In order for time delay errors to be small compared to errors from the gravitational potential, we will need the precision of time delays on an individual system to be better than 3%, and those estimates will need to be robust to systematic error. Simple techniques such as the “dispersion” method (Pelt et al., 1994; Pelt et al., 1996) or spline interpolation through the sparsely sampled data (e.g., Tewes et al., 2013) yield time delays that may be insufficiently accurate for a Stage IV dark energy experiment. More complex algorithms such as Gaussian Process modeling (e.g., Tewes et al., 2013a; Hojjati & Linder, 2013) may hold more promise. None of these methods have been tested on large scale data sets.
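
    As a deliberately simplistic illustration of the kind of estimator the challenge is designed to stress-test, the sketch below performs a brute-force grid search over trial delays: one light curve is shifted, linearly interpolated onto the epochs of the other, and compared via a weighted mean squared difference with a free magnitude offset. This is not the method of any participating team; the variable names and synthetic light curves are illustrative assumptions only.

\begin{verbatim}
import numpy as np

def estimate_delay(t_a, m_a, t_b, m_b, sig_b, trial_delays):
    """Brute-force delay estimate: for each trial delay, map image B's
    epochs back to image A's time frame, interpolate A there, fit a
    free magnitude offset analytically, and record the reduced
    weighted squared difference. Returns the best delay and the curve."""
    chi2 = np.empty(len(trial_delays))
    for k, dt in enumerate(trial_delays):
        t_shifted = t_b - dt
        ok = (t_shifted > t_a.min()) & (t_shifted < t_a.max())
        a_interp = np.interp(t_shifted[ok], t_a, m_a)
        w = 1.0 / sig_b[ok] ** 2
        offset = np.sum(w * (m_b[ok] - a_interp)) / np.sum(w)
        chi2[k] = np.sum(w * (m_b[ok] - a_interp - offset) ** 2) / ok.sum()
    return trial_delays[np.argmin(chi2)], chi2

if __name__ == "__main__":
    # Synthetic example (made-up numbers): image B lags image A by 30 days.
    rng = np.random.default_rng(1)
    def signal(t):
        return np.sin(2.0 * np.pi * t / 180.0)   # stand-in for quasar variability
    t_a = np.sort(rng.uniform(0.0, 500.0, 120))  # irregular epochs, in days
    t_b = np.sort(rng.uniform(0.0, 500.0, 120))
    m_a = signal(t_a) + rng.normal(0.0, 0.05, t_a.size)
    m_b = signal(t_b - 30.0) + 0.3 + rng.normal(0.0, 0.05, t_b.size)
    sig_b = np.full(t_b.size, 0.05)
    best, _ = estimate_delay(t_a, m_a, t_b, m_b, sig_b,
                             np.arange(-100.0, 100.0, 0.5))
    print("recovered delay ~ %.1f days" % best)
\end{verbatim}

    Even this crude estimator illustrates the difficulties the challenge probes: season gaps, irregular sampling, and correlated noise can shift or broaden the minimum of such a statistic, so robust uncertainty estimates matter as much as the point estimates themselves.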

    At present, it is unclear whether the baseline “universal cadence” LSST sampling frequency of \(\sim10\) days in a given filter and \(\sim 4\) days on average across all filters (LSST Science Collaboration et al., 2009; Ivezić et al., 2008) will enable sufficiently accurate time delay measurements, despite the long campaign length (\(\sim10\) years). While “follow-up” monitoring observations to supplement the LSST lightcurves may not be feasible at the 1000-lens sample scale, it may be possible to design a survey strategy that optimizes cadence and monitoring for at least some fields. In order to maximize the capability of LSST to probe the universe through strong lensing time delays, we must understand the interaction between the time delay estimation algorithms and the anticipated data properties. While optimizing the accuracy of LSST time delays is our long term objective, improving the present-day algorithms will benefit current and planned lens monitoring projects as well. Exploring the impact of cadences and campaign lengths spanning the range between today’s monitoring campaigns and that expected from a baseline LSST survey will allow us to simultaneously provide input to current projects looking to expand their sample sizes (and hence monitor more cheaply), and to the LSST project, whose exact survey strategy is not yet decided.
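
    To give a concrete sense of the data properties under discussion, the short sketch below generates a toy set of single-filter observation epochs with a \(\sim10\)-day mean cadence, annual season gaps, and a 10-year campaign; the season length and the exponential waiting-time model are assumptions made purely for illustration, not LSST specifications.

\begin{verbatim}
import numpy as np

def mock_campaign(n_years=10, season_days=150, mean_cadence=10.0, seed=0):
    """Toy single-filter observing schedule: one observing season per
    year, with successive visits separated by exponentially distributed
    waiting times of the given mean (in days)."""
    rng = np.random.default_rng(seed)
    epochs = []
    for year in range(n_years):
        t = year * 365.25                       # season start
        season_end = t + season_days
        while True:
            t += rng.exponential(mean_cadence)  # wait for the next visit
            if t >= season_end:
                break
            epochs.append(t)
    return np.array(epochs)

epochs = mock_campaign()
print("%d epochs over %d seasons" % (epochs.size, 10))
\end{verbatim}

    Feeding light curves sampled at such epochs, and at the denser cadences of current monitoring campaigns, through estimators like the one sketched above is exactly the comparison the challenge is designed to enable.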

    The goal of this work, then, is to enable realistic estimates of the time delay measurement accuracy that will be feasible with LSST. We will achieve this via a “Time Delay Challenge” (TDC) to the community. Independent, blind analysis of plausibly realistic LSST-like lightcurves will allow the accuracy of current time series analysis algorithms to be assessed, which will in turn enable simple cosmographic forecasts for the anticipated LSST dataset. This work can be seen as a first step towards a full understanding of all systematic uncertainties present in the LSST strong lens dataset, and will also provide valuable insight into the survey strategy needs of both Stage III and Stage IV time delay lens cosmography programs. Blind analysis, where the true value of the quantity being reconstructed is not known by the researchers, is a key tool for robustly testing the analysis procedure without biasing the results by continuing to look for errors until the right answer is reached, and then stopping.

    This paper is organized as follows. In Section \ref{sec:light_curves} we describe the simulated data that we have generated for the challenge, including some of the broad details of observational and physical effects that may make extracting accurate time delays difficult, without giving away information that will not be observationally known during or after the LSST survey. Then, in Section \ref{sec:structure} we describe the structure of the challenge, how interested groups can access the mock light curves, and a minimal set of approximate cosmographic accuracy criteria that we will use to assess their performance.
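
    For concreteness, the sketch below shows one plausible way to compute the goodness-of-fit, efficiency, precision, and accuracy statistics referred to in the abstract, given a table of submitted estimates \(\tilde{\Delta t}_i \pm \delta_i\) and true delays \(\Delta t_i\). The precise definitions adopted by the challenge are given in Section \ref{sec:structure}; the formulas here are illustrative assumptions rather than the official metrics.

\begin{verbatim}
import numpy as np

def challenge_metrics(dt_true, dt_est, dt_err):
    """Illustrative summary statistics for a set of time delay submissions.
    dt_est and dt_err may contain NaN where no estimate was submitted.
    These formulas are plausible definitions, not the official metrics."""
    submitted = np.isfinite(dt_est) & np.isfinite(dt_err)
    f = submitted.mean()                          # efficiency: fraction of delays returned
    resid = dt_est[submitted] - dt_true[submitted]
    chi2 = np.mean((resid / dt_err[submitted]) ** 2)                      # goodness of fit
    precision = np.mean(dt_err[submitted] / np.abs(dt_true[submitted]))   # mean fractional error bar
    accuracy = np.mean(resid / np.abs(dt_true[submitted]))                # mean fractional bias
    return f, chi2, precision, accuracy
\end{verbatim}

    In this picture, a useful submission would combine a high efficiency and a goodness-of-fit near unity with fractional precision and bias at the few-percent level or better, consistent with the per-system precision requirement and the sample-level accuracy goal described above.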