\section{Algorithms}

\begin{itemize}
\item RealBogus or equivalent
\item star/galaxy separation
\item cosmic ray rejection
\item PSF
\item brighter-fatter?
\item calibrations: ground based observations
\item relative photometry
\item ubercal?
\item zpvm
\item sub-pixel
\end{itemize}

\subsection{Bias and overscan}

Note that there is a known (unfixed) bug in the PTF pipeline.
The bias overscan removal is done using the median of the entire
overscan, instead of line by line.
The overscan removal should be done line by line.
For the calculation of the bias std we should use the normal standard deviation,
rather than more robust estimators.

\subsection{Flat}

There will be an attempt to develop and build a flat-field screen
for ZTF, including an apparatus to measure the spectral response
of the system.
As a backup, we should have the capability to construct
flat images from the science images.
We identify two problems:
1. These nightly flats are photon limited and have an rms scatter of about 5\,mmag.
To fix this we need to construct flats from multiple-night data.
2. According to Frank, there are large night-to-night variations in the flat.
Since we do~not expect night-to-night variations (with the exception of dust particles...),
this may indicate a problem in the way the flats are constructed (e.g., stars and variable sky
are not removed properly prior to flat construction).
We need to investigate this.
Do we want to use a combination of sky flats and star flats?

\subsection{Background and its noise}

The image background and its noise are important components that we need to estimate
for the source detection, image coaddition, and image subtraction steps.
The background cannot be treated as a global property of the image,
as it is position dependent (e.g., Galactic cirrus; Geocoronal emission-line variability).
We suggest the following algorithm to measure the sky background and its noise:
Split the image into blocks of $128\times128$\,pixels.
{\bf need to understand implications of block size}
In each block calculate a histogram of the pixel values, with a bin size such that
the typical number of pixels per bin will be 100.
Fit a Gaussian to the histogram and store the background and std.
Finally, interpolate this grid to all pixels, using a cubic interpolation scheme.
We need to remove blocks containing very bright stars (with $>X$\,e$^{-}$).

\subsection{PSF estimation}

TBD.

\subsection{Source extraction}

The source extraction step should use a linear matched filter.
The matched filter should be the best estimated PSF, normalized such that its
integral is 1.
The source extraction step should be done in units of sigmas.
The way to achieve this is via the following process (see the sketch below):
1. Estimate the background and noise.
2. Subtract the background.
3. Filter the image using its own PSF.
4. Calculate the std of the filtered image.
5. Divide the result of step 3 by that of step 4; this gives the normalized likelihood
(in units of sigma) that a pixel contains a source.
6. Select sources which are $5\sigma$ above the background.
Note that {\tt SExtractor} does not work in units of sigma, and
hence its detection threshold has meaningless units (which vary from image to image).
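A minimal Python/NumPy sketch of steps 1--6 above, assuming the
background map (step 1) and a PSF normalized to unit integral are
already in hand; the function name and the simple local-maximum peak
selection are illustrative choices, not part of the pipeline
specification.

\begin{verbatim}
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import maximum_filter

def detect_sources(image, psf, background, thresh=5.0):
    # psf is assumed to be normalized so that psf.sum() == 1
    # 2. subtract the (position-dependent) background estimate
    sub = image - background
    # 3. filter the image with its own PSF (linear matched filter)
    filt = fftconvolve(sub, psf[::-1, ::-1], mode='same')
    # 4. std of the filtered image
    filt_std = filt.std()
    # 5. normalized likelihood (in sigma) that a pixel hosts a source
    snr = filt / filt_std
    # 6. local maxima above the threshold are source candidates
    peaks = (snr == maximum_filter(snr, size=3)) & (snr > thresh)
    return np.argwhere(peaks), snr
\end{verbatim}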
\subsubsection{Deblending}

TBD.

\subsection{Photometry}

Apply both aperture photometry and PSF photometry.
The easiest way to implement this is using filtering (e.g., Zackay \& Ofek 2015a).
For aperture photometry, the summation of non-whole pixels is non-trivial.
See Bickerton \& Lupton for a possible solution.
DAOphot-like non-linear PSF fitting has several important advantages
in low Galactic latitude fields.
However, running it on a single PTF field may take up to several minutes.
This requires some thinking and development.

\subsection{Astrometry}

The astrometry can be divided into two steps:
pattern matching for star identification,
and solving for the transformation.

\subsubsection{Pattern matching}

PTF suffers from a very high failure rate
(1\% overall), and a much higher one at low Galactic latitudes.
This should be fixed.
We note that current state-of-the-art programs
like {\tt SCamp} and {\tt Astrometry.NET} have too many degrees of freedom
that are not needed in our case.
Specifically, we can use solvers that assume the known rotation and plate
scale of the images.
This strategy will lower the number of failures due to wrong pattern matching,
and will dramatically shorten the run time.
Another thing that should be utilized is a check of the consistency
between the solutions of all 16 CCDs. This will enable
a very robust check for failures.
We require that the astrometry failure rate for ZTF be lower than $10^{-4}$.

\subsubsection{The astrometric transformation}

Current schemes (SIP and PV) are not optimal.
The main problem is that instead of modeling the refraction effect,
these transformations take it out using high-order polynomials.
We do~not think this is a big problem.
The astrometric transformation should include color terms.
The reference catalog should be the state of the art available
(hopefully GAIA).
We require that the median rms scatter be below 50\,mas,
and not worse than 100\,mas.
This scatter should be measured locally, in blocks of $256\times256$\,pixels,
relative to the reference catalog (not 2MASS), and the rms requirement
should hold for each block separately.

\subsection{Image registration}

Image registration relies on three steps:
source matching (discussed in the astrometry section),
solving for the transformation,
and interpolation.
For solving the transformation between two images taken with the same CCD,
where the boresight is roughly the same (to the level of about one arcmin),
we suggest using the following model:
\begin{eqnarray}
X_{R} & = & A_{1} + A_{2}X_{N} + A_{3}Y_{N} + R_{R}\sin(PA_{R}) + R_{N}\sin(PA_{N}),\nonumber\\
Y_{R} & = & A_{4} + A_{5}X_{N} + A_{6}Y_{N} + R_{R}\cos(PA_{R}) + R_{N}\cos(PA_{N}),
\end{eqnarray}
where $A$ and $R$ are the free parameters.
Note that there are no distortion terms here, as we assume the images
are taken at the same position and with the same CCD.
This should be done on small chunks of $1024\times1024$\,pix
in order to avoid color terms and high-order effects.
It is worthwhile to try and reduce the block size and see if the rms improves.
This will also enable us to estimate the correlation length scale of astrometric scintillation
and to find the best block size for registration.

\subsection{Cosmic rays}

We suggest two simple, fast, and robust ways to detect cosmic rays
(see the sketch below for the first).
The first method is to run a linear matched filter (the PSF)
on the image and compare the result with the image filtered
with a delta function (which is a model for a cosmic ray).
The difference between the two images gives the normalized log-likelihood
that the source is real rather than a cosmic ray.
The second method is via image subtraction (see Zackay et al. 2015).
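A minimal sketch of the first method, assuming a background-subtracted
image, a unit-sum PSF, and a (scalar or per-pixel) noise estimate; the
way the two filtered images are normalized and compared here is an
illustrative choice rather than the final recipe.

\begin{verbatim}
import numpy as np
from scipy.signal import fftconvolve

def cosmic_ray_score(sub_image, psf, noise_std):
    # detection significance under the "real source" (PSF) model
    s_psf = fftconvolve(sub_image, psf[::-1, ::-1], mode='same')
    s_psf = s_psf / (np.sqrt((psf**2).sum()) * noise_std)
    # detection significance under the "cosmic ray" (delta-function)
    # model: filtering with a delta function leaves the image unchanged
    s_delta = sub_image / noise_std
    # positive values favor a PSF-like (real) source, negative values
    # favor a delta-function-like (cosmic ray) hit
    return s_psf - s_delta
\end{verbatim}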
\subsection{Coaddition}

See the detailed description in Zackay \& Ofek (2015b).
For subtraction we may want a reference image as a function of airmass.
The reason for this is that the atmospheric refraction is color-dependent.
In the $g$-band, we expect that an M5V star will shift its position by 0.25''
relative to an A0V star, between altitudes of 40 and 90\,deg.
How many reference images are needed is not clear.

\subsection{Subtraction}

See the detailed description in Zackay et al. (2015).
We note that this process returns the normalized likelihood
that a source is a transient, and hence
makes the RealBogus process redundant.

\subsection{Fakes and detection efficiency}

We suggest running a parallel fake-source pipeline in the subtraction step
(a sketch is given at the end of this section).
The fake pipeline should generate $\sim1000$ fakes per image,
in the magnitude range 18 to 21, and the fake recovery
efficiency and false-alarm probability, as a function of magnitude,
should be reported per image.

\subsection{Astrometry}

See the astrometry section above.

\subsection{Relative photometry}

TBD.

\subsection{Calibrated photometry}

TBD.

\subsection{Source matching}

TBD.

\subsection{Daily QA metrics}

TBD.
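To make the fake-source step above concrete, the following is a
minimal Python sketch of injecting PSF-shaped fakes and measuring the
recovery efficiency per magnitude bin; the zero-point convention, the
match radius, and the function names are illustrative assumptions and
not part of the pipeline specification.

\begin{verbatim}
import numpy as np

def inject_fakes(image, psf, zero_point, n_fakes=1000,
                 mag_range=(18.0, 21.0), seed=None):
    # inject n_fakes PSF-shaped sources at random positions and
    # magnitudes; flux = 10**(-0.4*(mag - zero_point)) is an assumed
    # zero-point convention
    rng = np.random.default_rng(seed)
    img = image.copy()
    ny, nx = img.shape
    ph, pw = psf.shape
    fakes = []
    for _ in range(n_fakes):
        mag = rng.uniform(*mag_range)
        flux = 10.0**(-0.4 * (mag - zero_point))
        y = rng.integers(0, ny - ph)   # keep the stamp inside the image
        x = rng.integers(0, nx - pw)
        img[y:y + ph, x:x + pw] += flux * psf
        fakes.append((x + pw // 2, y + ph // 2, mag))
    return img, np.array(fakes)

def recovery_efficiency(fakes, detected_xy, mag_bins, match_radius=2.0):
    # fraction of injected fakes recovered by the pipeline, per
    # magnitude bin; detected_xy is an (N, 2) array of x, y positions
    eff = []
    for lo, hi in zip(mag_bins[:-1], mag_bins[1:]):
        sel = fakes[(fakes[:, 2] >= lo) & (fakes[:, 2] < hi)]
        if len(sel) == 0 or len(detected_xy) == 0:
            eff.append(np.nan)
            continue
        d2 = ((sel[:, None, :2] - detected_xy[None, :, :])**2).sum(axis=2)
        eff.append(np.mean(d2.min(axis=1) <= match_radius**2))
    return np.array(eff)
\end{verbatim}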