Performing a fully spinning analysis is computationally expensive. The main computational cost is generating the SpinTaylorT4 waveform, which must be done each time the likelihood is evaluated at a different point in parameter space. Progress is being made in reducing the cost of generating waveforms and evaluating the likelihood \citep[e.g.,][]{Canizares_2013,P_rrer_2014}; employing reduced-order modelling can speed up the non-spinning TaylorF2 analysis by a factor of $\sim 30$ \citep{Canizares_2015}. However, this has yet to be done for a waveform that includes the effects of two unaligned spins.

In figure \ref{fig:wall-time}, we present the approximate wall times taken for analyses comparable to those presented here. The low-latency \textsc{bayestar} and the high-latency fully spinning SpinTaylorT4 results are for the $250$ events considered here. The medium-latency non-spinning TaylorF2 results are from \citet{Berry_2014}; these are not for the same set of signals, but represent a similar population (in more realistic non-Gaussian noise), indicating what we hope to achieve in practice.\footnote{We use the more reliably estimated figures for the \textsc{LALInference} runs.} The wall times for \textsc{bayestar} are significantly reduced compared to those in \citet{Berry_2014} because of recent changes to how \textsc{bayestar} integrates over distance \citep{SingerPrice2015}: the mean (median) time is $4.6~\mathrm{s}$ ($4.5~\mathrm{s}$) and the maximum is $6.6~\mathrm{s}$. We assume that $2000$ (independent) posterior samples are collected for both of the \textsc{LALInference} analyses. The number of samples determines how well we can characterize the posterior: $\sim2000$ are typically needed to calculate $\mathrm{CR}_{0.9}$ to $10\%$ accuracy \citep{DelPozzo_2015}. In practice, we may want to collect additional samples to ensure our results are accurate, but preliminary results could also be released when the medium-latency analysis has collected $1000$ samples, which would be after half the time shown here, with a maximum wall time of $5.87\times10^4~\mathrm{s} \simeq 16~\mathrm{hr}$. We see that the fully spinning analysis is significantly (here a factor of $\sim20$) more expensive than the non-spinning analysis, taking a mean (median) time of $1.36\times10^6~\mathrm{s} \simeq 16~\mathrm{days}$ ($9.96\times10^5~\mathrm{s} \simeq 12~\mathrm{days}$) and a maximum of $7.03\times10^6~\mathrm{s} \simeq 81~\mathrm{days}$.

The times shown in figure \ref{fig:wall-time} illustrate the hierarchy of times associated with the different analyses. However, they should not be used as exact benchmarks for times expected during the first observing run of aLIGO, because the versions of \textsc{LALInference} used here are not the most up to date. Following the detection pipeline identifying a candidate BNS signal, we expect \textsc{bayestar} results with a latency of a few seconds, non-spinning \textsc{LALInference} results with a latency of a few hours, and fully spinning \textsc{LALInference} results only after weeks of computation. While work is underway to improve the latency of, and to optimize, parameter estimation with \textsc{LALInference}, there is also the possibility of developing new algorithms that provide parameter estimates with lower latency \citep{Haster_2015,Pankow:2015cra}.
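For concreteness, the quoted wall times convert between units as follows (using $1~\mathrm{hr} = 3600~\mathrm{s}$ and $1~\mathrm{day} = 86400~\mathrm{s}$):
\begin{align}
5.87\times10^{4}~\mathrm{s} &= \frac{5.87\times10^{4}}{3600}~\mathrm{hr} \simeq 16.3~\mathrm{hr}, \\
1.36\times10^{6}~\mathrm{s} &= \frac{1.36\times10^{6}}{86400}~\mathrm{days} \simeq 15.7~\mathrm{days}, \\
9.96\times10^{5}~\mathrm{s} &\simeq 11.5~\mathrm{days}, \qquad
7.03\times10^{6}~\mathrm{s} \simeq 81.4~\mathrm{days}.
\end{align}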
Improving computational efficiency is important for later observing runs with the advanced-detector network: as sensitivities improve and lower frequencies can be measured, we need to calculate longer waveforms (at even greater expense).
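As an illustrative, leading-order (Newtonian) estimate of this scaling, the time a binary with chirp mass $\mathcal{M}$ spends in band above a low-frequency cut-off $f_{\mathrm{low}}$ is approximately
\begin{equation}
\tau \simeq \frac{5}{256}\left(\frac{G\mathcal{M}}{c^{3}}\right)^{-5/3}\left(\pi f_{\mathrm{low}}\right)^{-8/3} \propto f_{\mathrm{low}}^{-8/3},
\end{equation}
so halving the cut-off frequency lengthens the waveform (and hence the stretch of data that must be analysed) by a factor of $2^{8/3} \simeq 6$.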