# Alex Dvornikov, Eric McLaughlin, Deekshitha Manjunath

Before outlining the code, we note that image processing requires images: besides the science frames we also need bias and flat-field frames. We collected the images at Mount Laguna Observatory on September 29, 2015. Below is our Python code to process them, that is, to extract the science signal while discarding the noise and telescope imperfections that are manifest in the raw science images.

First, we import the necessary modules and create the lists we will populate and manipulate. The functions, each with a brief explanation, follow below.

    from glob import glob
    import numpy
    import pyfits
    from array import array
    import datetime

    # Lists to be populated with the bias and flat-field frames.
    biaslist = []
    flatlist_r = []
    flatlist_h_al = []

    # Directory holding the night's raw frames.
    path = '/Users/compphysadmin/Desktop/SDSU_Fall_2015/A680/HW6/150929.mlo40/'
    files = path + 'a*.fit'

    # glob returns files in arbitrary order, so sort the list.
    fnames = glob(files)
    fnames = sorted(fnames)
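The call to `sorted` matters because `glob` makes no ordering guarantee. A minimal, self-contained demonstration (the frame names below are made up to mimic the `a*.fit` convention; the temporary directory stands in for the data path):

```python
from glob import glob
import os
import tempfile

# Create a few hypothetical frame files in a scratch directory.
tmpdir = tempfile.mkdtemp()
for name in ["a010.fit", "a002.fit", "a100.fit"]:
    open(os.path.join(tmpdir, name), "w").close()

# Sorting puts zero-padded frame numbers into chronological order.
fnames = sorted(glob(os.path.join(tmpdir, "a*.fit")))
print([os.path.basename(f) for f in fnames])
# → ['a002.fit', 'a010.fit', 'a100.fit']
```

Zero-padded names like these sort correctly under plain string comparison, which is why the lexicographic `sorted` is sufficient here.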

Then, for each image, we compute the mean of the overscan region and subtract it. The overscan spans x pixels 2068 to 2200. Because Python (NumPy) indexes arrays as (row, column), we transpose the image so that the per-row overscan means broadcast correctly during the subtraction, then transpose back. The new images, with the overscan region trimmed away, are saved into the working directory.

    def remove_overscan(fnames=fnames):
        for fname in fnames:
            hdu = pyfits.open(fname)
            image = hdu[0].data
            header = hdu[0].header
            # Focus frames are not science data; skip them.
            if header['OBJECT'] != 'focus':
                # Mean of the overscan region, one value per row.
                overscan = numpy.mean(image[:, 2068:2200], axis=1)
                # Transpose so the per-row means broadcast along rows,
                # then transpose back.
                newimage = (image.transpose() - overscan).transpose()
                # Keep only the science columns (the overscan starts
                # at column 2068, so slice up to but not including it).
                newimage = newimage[:, 0:2068]
                hdu = pyfits.PrimaryHDU(newimage, header)
                hdu.writeto(path + 'pro_' + header['IRAFNAME'] + '.fit')
        return
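The transpose-and-broadcast step can be illustrated on a toy array. The 3x5 "image" and its column ranges below are invented for illustration only; columns 3-4 play the role of the overscan:

```python
import numpy as np

# Toy 3-row, 5-column image; the last two columns are the "overscan".
image = np.array([[10., 11., 12., 2., 2.],
                  [20., 21., 22., 3., 3.],
                  [30., 31., 32., 4., 4.]])

# Per-row overscan mean, shape (3,): one bias level per row.
overscan = np.mean(image[:, 3:5], axis=1)

# NumPy broadcasts over the trailing axis, so transposing makes rows
# the trailing axis; subtract, then transpose back.
corrected = (image.transpose() - overscan).transpose()

# Trim away the overscan columns, keeping only the science region.
corrected = corrected[:, 0:3]
print(corrected)
```

An equivalent, more direct form is `image - overscan[:, None]`, which adds a length-one axis so the per-row means broadcast without the double transpose; both give the same result.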