Modeling the Swift BAT Trigger Algorithm with Machine Learning
  • Philip B. Graff
University of Maryland

Corresponding Author: [email protected]


Abstract

To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift Burst Alert Telescope (BAT). This study considers the problem of modeling the Swift BAT triggering algorithm for long GRBs, a computationally expensive procedure, and emulates it with machine learning algorithms. A large sample of simulated GRBs from \cite{Lien2014} is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of \(\gtrsim 97\%\) (\(\lesssim 3\%\) error), a significant improvement over a simple cut in GRB flux, which has an accuracy of \(89.6\%\) (\(10.4\%\) error). These models are then used to measure the detection efficiency of Swift as a function of redshift \(z\), which in turn is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of \(n_{0}\sim 0.48\ {\rm Gpc}^{-3}\,{\rm yr}^{-1}\) with power-law indices of \(n_{1}\sim 1.7\) and \(n_{2}\sim-5.9\) for GRBs below and above a break point of \(z_{1}\sim 6.8\), respectively. This methodology improves upon earlier studies by modeling Swift detection more accurately and using the result for fully Bayesian model fitting. The code used in this analysis is publicly available online at https://github.com/PBGraff/SwiftGRB_PEanalysis.
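The rate parameters \(n_{0}\), \(n_{1}\), \(n_{2}\), and \(z_{1}\) quoted above are easiest to read against the functional form of the rate model. The broken power law below is the standard parameterization used in work of this kind (e.g., \cite{Lien2014}); it is shown here as an assumed illustration rather than quoted from the paper's body:

\[
R_{\rm GRB}(z)=
\begin{cases}
n_{0}\,(1+z)^{n_{1}}, & z\le z_{1},\\
n_{0}\,(1+z_{1})^{n_{1}-n_{2}}\,(1+z)^{n_{2}}, & z>z_{1},
\end{cases}
\]

so that \(n_{0}\) is the local (\(z=0\)) rate density, the rate rises as \((1+z)^{1.7}\) out to \(z_{1}\sim 6.8\), and it falls steeply as \((1+z)^{-5.9}\) beyond the break.

As a concrete illustration of the classification setup described in the abstract, the sketch below trains one of the listed model classes (a random forest, via scikit-learn) to predict whether a simulated burst triggers BAT. This is a minimal sketch, not the paper's actual pipeline; the data file and feature names are hypothetical placeholders standing in for the simulated sample of \cite{Lien2014}.

```python
# Minimal sketch (assumed setup, not the paper's pipeline): emulate the
# Swift BAT trigger decision with a random-forest classifier.
# "simulated_grbs.csv" and the feature columns are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

data = pd.read_csv("simulated_grbs.csv")
features = ["peak_flux", "redshift", "duration", "peak_energy"]
X = data[features].values
y = data["triggered"].values  # 1 if the full trigger simulation fired, else 0

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)

# The paper reports >~97% accuracy for its best models, versus 89.6% for
# a simple flux cut; a held-out test set gives a comparable check here.
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Averaging such a classifier's predictions over simulated bursts binned in \(z\) yields the detection-efficiency curve that feeds the Bayesian fit of the rate model above.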