Greg Dobler edited introduction.tex
almost 11 years ago
Commit id: d1a2109ec6115a7f576d3511a9e8e5fab9fa4932
When a quasar is lensed into multiple images by a foreground galaxy, the observables for the lensed images are their positions, magnifications, and time delays. These observables can be used to directly measure both the structure of the lens galaxy itself (on mass scales $\geq M_{\odot}$) and cosmological parameters. In particular, while it has recently been proposed to use time delays to study massive substructures within lens galaxies [REF], they have \emph{already} proven to be a powerful tool for measuring $H_0$ [REFS].
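The dependence of $H_0$ on the measured delays can be made explicit. In the standard formulation (stated here for reference; the symbols below are not defined in the original text), the delay between images $i$ and $j$ is
\[
\Delta t_{ij} = \frac{D_{\Delta t}}{c}\left[\phi(\theta_i,\beta)-\phi(\theta_j,\beta)\right],
\qquad
D_{\Delta t} \equiv (1+z_d)\,\frac{D_d D_s}{D_{ds}} \propto \frac{1}{H_0},
\]
where $\phi$ is the Fermat potential of the lens, $\beta$ is the source position, and $D_d$, $D_s$, and $D_{ds}$ are the angular diameter distances to the lens, to the source, and between them. Since $D_{\Delta t}\propto H_0^{-1}$, a measured delay combined with a mass model for the lens yields $H_0$.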
The accuracy of $H_0$ derived from the data for a given lens system depends on both the mass model for that system and the precision with which the lensing observables are measured. Typically, the positions and fluxes (and occasionally shapes, if the source is resolved) of the images can be obtained to sub-percent accuracy [REFS]; time delay accuracies, however, are usually of order days, which represents only a few percent accuracy for typical systems. This is compounded by the fact that measuring time delays requires continuous monitoring on time scales of weeks to years, and, as one might expect intuitively, the measured time delay precision does not significantly exceed the sampling rate (or time between epochs).
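The sampling-rate limit can be illustrated with a toy simulation (not part of the original analysis): two noisy copies of a hypothetical smooth light curve, offset by an assumed true delay and sampled at a fixed cadence, with the delay recovered by a simple grid-search dispersion estimate. All numerical values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for intrinsic quasar variability (real AGN light curves are
# stochastic; a smooth multi-sine toy model suffices for illustration).
def flux(t):
    return np.sin(2 * np.pi * t / 120.0) + 0.5 * np.sin(2 * np.pi * t / 47.0)

true_delay = 14.0   # days: image B lags image A (hypothetical value)
cadence = 5.0       # days between observing epochs (hypothetical)
t_obs = np.arange(0.0, 1000.0, cadence)

# Noisy light curves of the two images, sampled at the survey cadence.
a = flux(t_obs) + 0.05 * rng.standard_normal(t_obs.size)
b = flux(t_obs - true_delay) + 0.05 * rng.standard_normal(t_obs.size)

# Grid search: shift B by a trial delay, interpolate onto A's epochs,
# and minimize the mean squared difference (a simple dispersion method).
trial_delays = np.arange(0.0, 30.0, 0.5)
mse = []
for d in trial_delays:
    b_shifted = np.interp(t_obs, t_obs - d, b)  # B advanced by d days
    mse.append(np.mean((a - b_shifted) ** 2))
best = trial_delays[np.argmin(mse)]
print(f"recovered delay: {best:.1f} d (true: {true_delay:.1f} d)")
```

Tightening the trial grid below the cadence buys little: between epochs the shifted curve is only interpolated, so the recovered delay is uncertain at roughly the level of the time between observations.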
With the upcoming \emph{Large Synoptic Survey Telescope} (LSST), we will have the first long-baseline, multi-epoch observational campaign of $\sim$1000 lensed quasars amenable to time delay measurements. What, then, are some of the issues with measuring time delays to the required precision?