In figure \ref{fig:wall-time}, we present the approximate wall time taken for analyses comparable to those presented here. The low-latency \textsc{bayestar} and medium-latency non-spinning TaylorF2 results are from \citet{Berry_2014}.\footnote{We use the more reliably estimated figures for the \textsc{LALInference} runs.} These are not for the set of $250$ signals analysed here, but for a similar population (in more realistic non-Gaussian noise), and so represent what we hope to achieve in reality. We assume that $2000$ (independent) posterior samples are collected for both of the \textsc{LALInference} analyses. The number of samples determines how well we can characterize the posterior: $\sim2000$ are typically needed to calculate $\mathrm{CR}_{0.9}$ to $10\%$ accuracy \citep{DelPozzo_2015}. In practice, we may want to collect additional samples to ensure our results are accurate, but preliminary results could also be released when the medium-latency analysis has collected $1000$ samples, which would be after half the time shown here, with a maximum wall time of $5.87\times10^4~\mathrm{s} \simeq 16~\mathrm{hr}$. We see that the fully spinning analysis is significantly (here a factor of $\sim20$) more expensive than the non-spinning analysis, taking a mean (median) time of $1.36\times10^6~\mathrm{s} \simeq 16~\mathrm{days}$ ($9.96\times10^5~\mathrm{s} \simeq 12~\mathrm{days}$) and a maximum of $7.03\times10^6~\mathrm{s} \simeq 81~\mathrm{days}$.

The times shown in figure \ref{fig:wall-time} illustrate the hierarchy of times associated with different analyses; however, they should not be used as exact benchmarks for the times expected during the first observing run of aLIGO because the codes used here are not the most up-to-date versions. In particular, recent changes to how \textsc{bayestar} integrates over distance have reduced its wall time to $\sim4$--$15~\mathrm{s}$ \citep{SingerPrice2015}. Following the detection pipeline identifying a candidate BNS signal, we expect \textsc{bayestar} results with a latency of a few seconds, non-spinning \textsc{LALInference} results with a latency of a few hours, and fully spinning \textsc{LALInference} results only after weeks of computation. While work is underway to improve the latency of and to optimize parameter estimation with \textsc{LALInference}, there is also the possibility of developing new algorithms that provide parameter estimates with lower latency \citep{Pankow:2015cra}. Improving computational efficiency is important for later observing runs with the advanced-detector network: as sensitivities improve and lower frequencies can be measured, we need to calculate longer waveforms (at even greater expense).
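To illustrate why the number of independent posterior samples limits how precisely a credible region can be characterized, the following Python sketch estimates the scatter in the recovered width of a $90\%$ credible interval as a function of sample size. It is an illustrative toy only, assuming a unit Gaussian posterior (so the true interval width is known analytically); it shows the general $\sim N^{-1/2}$ scaling of quantile estimates rather than reproducing the specific \textsc{LALInference} result of \citet{DelPozzo_2015}.
\begin{verbatim}
import numpy as np

# Toy illustration (not the LALInference pipeline): scatter of the
# estimated 90% credible-interval width as a function of the number of
# independent posterior samples, assuming a unit Gaussian posterior so
# that the true interval width is known analytically.
rng = np.random.default_rng(42)
true_width = 2 * 1.6448536269514722  # exact 90% central interval width

for n_samples in (500, 1000, 2000, 4000):
    widths = []
    for _ in range(200):  # repeat to measure the scatter of the estimate
        samples = rng.standard_normal(n_samples)
        lo, hi = np.percentile(samples, [5.0, 95.0])
        widths.append(hi - lo)
    rel_scatter = np.std(widths) / true_width
    print(f"N = {n_samples:5d}: relative scatter in width ~ {rel_scatter:.1%}")
\end{verbatim}
The scatter shrinks roughly as $N^{-1/2}$ in this one-dimensional toy; real posteriors are higher dimensional and correlated, so in practice more samples may be required to reach the same accuracy.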