Introduction

The data reduction pipeline is a series of command-line utilities, each controlled by command-line switches. Every step produces a variety of outputs; eventually each output should be documented, but for now only the most important ones are described. The pipeline flow is linear and follows the “path” outlined in the table below.

The pipeline also has a meta step: the Makefile generator. It is not formally a pipeline step, but it sets the pipeline up. The generator will be wrapped in a cron job that runs roughly every 10 minutes, and it is smart enough to recognize when an observation set is complete (e.g., all three observations of a science target, or all 10 bias images). Over the course of the night, “make” is run repeatedly, and the data are reduced and posted to a web page.
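To illustrate the kind of decision the Makefile generator has to make, the sketch below checks whether an observation set is complete before any rules would be written. It is not the actual generator: the directory layout, file-naming pattern, and expected frame counts are assumptions taken from the examples above, and the cron schedule shown in the comment is likewise only illustrative.

```python
"""Sketch of the completeness check a Makefile generator might perform.

Assumed cron entry (illustrative only), run every 10 minutes:
    */10 * * * * generate_makefile /data/incoming
All paths, naming conventions, and counts below are assumptions.
"""
from pathlib import Path

# Assumed completeness criteria: an observation set is "ready" once the
# expected number of frames of each kind has arrived on disk.
EXPECTED_COUNTS = {
    "bias": 10,      # e.g., all 10 bias images
    "science": 3,    # e.g., all three observations of a science target
}

def set_is_complete(night_dir: str, kind: str) -> bool:
    """Return True once every expected frame of `kind` is present."""
    frames = list(Path(night_dir).glob(f"{kind}*.fits"))
    return len(frames) >= EXPECTED_COUNTS[kind]

if __name__ == "__main__":
    night = "/data/incoming/20240101"   # hypothetical incoming directory
    for kind in EXPECTED_COUNTS:
        status = "complete" if set_is_complete(night, kind) else "waiting"
        print(f"{kind}: {status}")
```

Only once a set reports "complete" would the generator emit the corresponding Makefile rules, so that a partially transferred night never triggers a reduction.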

Step Name               Input                                 Output
----------------------  ------------------------------------  ----------------------------------------------
Bias subtraction        bias files                            bias0.1.fits, bias2.0.fits
Trace identification    dome flat                             dome.fits_segments.npy, intermediate products
Wavelength calibration  arc lamps                             fine.npy, intermediate products
Flat fielding           dome flat                             flats.npy
Cube generation         science frame, fine.npy, flats.npy    object_cube.npy
Object selection        data cube                             object.json
Extraction              object.json, object_cube.npy          object.npy
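Because the flow is linear, the table maps naturally onto make dependencies. The sketch below encodes the rows above and prints one rule per step; the input names without extensions (e.g., "bias_frames", "arc_lamps") and the recipe lines are placeholders, since the actual utilities and their switches are not listed in this section, and real rules would not lump multiple targets into a single line as this simplification does.

```python
"""Turn the step table above into make-style rules (placeholder commands only)."""

# (step, inputs, outputs) transcribed from the table; non-file inputs are
# represented by hypothetical placeholder names.
STEPS = [
    ("bias",       ["bias_frames"],                          ["bias0.1.fits", "bias2.0.fits"]),
    ("trace",      ["dome_flat"],                            ["dome.fits_segments.npy"]),
    ("wavelength", ["arc_lamps"],                            ["fine.npy"]),
    ("flat",       ["dome_flat"],                            ["flats.npy"]),
    ("cube",       ["science_frame", "fine.npy", "flats.npy"], ["object_cube.npy"]),
    ("select",     ["object_cube.npy"],                      ["object.json"]),
    ("extract",    ["object.json", "object_cube.npy"],       ["object.npy"]),
]

def emit_rules() -> str:
    """Render each step as a 'targets: prerequisites' rule with a stub recipe."""
    lines = []
    for name, inputs, outputs in STEPS:
        lines.append(f"{' '.join(outputs)}: {' '.join(inputs)}")
        lines.append(f"\t@echo 'placeholder: run the {name} utility here'")
        lines.append("")
    return "\n".join(lines)

if __name__ == "__main__":
    print(emit_rules())
```

Running the script prints a rule for each row of the table, which is roughly the shape of output the Makefile generator described above would need to produce for make to walk the chain from bias subtraction through extraction.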