After that, the fake-injected images are passed to the pipeline for source detection and photometric measurement. We cross-match the X-Y coordinates of the fake objects with those estimated by the pipeline, using a 2-pixel maximum separation. For fake objects that return multiple matches, we keep the one with the smallest separation (Claire has tried a different approach, keeping all matched objects; it has very little impact on the results). We also keep a record of the fake objects without any matched detection.

To make sure that the input models reflect the intrinsic distributions of the key parameters of the COSMOS galaxy models, we repeat this process 9 times. The same model can be selected in different runs, but such cases are very rare. In general, we have 420-440 different models for the Exp, Dev, and Sersic cases. For each model, the average, median, and standard deviation of the important photometric parameters are estimated from all the detections (in most cases >15 out of 22) and compared with the input values. Normally, in each run, 5-7% of the fake objects (22 CCDs x 50 models = 1100 fake objects) have no match within 2 pixels. Most of these cases are due to the faintness of the model and/or the proximity of bright objects.

At this point, we focus on comparing the input parameters with the magnitude, size, and shape measured by the CModel method in the pipeline. We notice that, among all the fake objects injected in each run, 6-8% have failed CModel photometry; it is not clear what exactly causes this problem. To make the comparison more relevant to the photometric measurement itself, we further exclude all matched detections with \(nChild > 0\) (normally, >10 of the 22 detections per model remain).
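As a concrete illustration of the matching step, below is a minimal Python sketch using a KD-tree nearest-neighbour query; it is not the pipeline code. The function name `match_fakes`, the toy coordinate arrays, and the 2048-pixel CCD size are assumptions made for the example, and in practice one would additionally cut on the pipeline's deblending and CModel failure flags (the \(nChild\) and CModel flag columns, whose exact names depend on the catalog schema).

```python
import numpy as np
from scipy.spatial import cKDTree

def match_fakes(fake_xy, det_xy, max_sep=2.0):
    """Match injected fakes to pipeline detections by pixel position.

    Each fake is paired with its single nearest detection, so a fake
    with several detections inside max_sep keeps only the one with the
    smallest separation.  Returns (matched fake indices, matching
    detection indices, unmatched fake indices).
    """
    tree = cKDTree(det_xy)
    # dist is inf (and idx out of range) when no detection lies
    # within max_sep pixels of the fake object.
    dist, idx = tree.query(fake_xy, k=1, distance_upper_bound=max_sep)
    matched = np.isfinite(dist)
    return np.flatnonzero(matched), idx[matched], np.flatnonzero(~matched)

# Toy usage with random positions standing in for one CCD's catalogs.
rng = np.random.default_rng(0)
fake_xy = rng.uniform(0, 2048, size=(50, 2))   # 50 injected models per CCD
det_xy = rng.uniform(0, 2048, size=(500, 2))   # pipeline detections

fake_idx, det_idx, lost = match_fakes(fake_xy, det_xy, max_sep=2.0)
print(f"matched {fake_idx.size}/50 fakes; {lost.size} without a match")
```

Keeping only the single nearest neighbour per fake reproduces the smallest-separation rule described above, and the `lost` indices give the per-run record of fakes without any match within 2 pixels.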