Survey for Fast Radio Bursts

We are now conducting a large VLA survey for the highly dispersed radio transients known as fast radio bursts (FRBs). The goal of the survey is to detect at least one FRB, localize it to arcsecond precision, and uniquely associate it with other objects. Assuming that the FRB has a host galaxy, arcsecond localizations permit unique associations out to a redshift of 1 \cite{2002AJ....123.1111B}.

This survey uses the VLA correlator to write one integration every 5 ms. Shorter integrations would be better matched to the \(\sim1\) ms pulse widths of known FRBs, but the resulting data rates are not sustainable. Assuming that FRBs uniformly populate a cosmological volume, we expect to detect one FRB in roughly 35 hours of observing. We have targeted five fields at high Galactic latitude, where the Galactic contribution to the dispersion measure is small, so that a highly dispersed event cannot be confused with a Galactic source.
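The sensitivity penalty of 5 ms integrations on a \(\sim1\) ms pulse can be estimated from the standard boxcar argument: the pulse fluence is spread over the full integration while the noise is fixed by the integration length. This is a minimal sketch of that scaling, assuming the pulse falls entirely within a single integration; the function name is ours, not the pipeline's.

```python
import math

def snr_dilution(pulse_width_ms, integration_ms):
    """S/N recovered for a pulse shorter than the integration time,
    relative to a matched (integration = pulse width) search.
    The fluence is diluted by (w / t_int), while the noise per
    integration scales as 1 / sqrt(t_int), giving sqrt(w / t_int)."""
    if pulse_width_ms >= integration_ms:
        return 1.0
    return math.sqrt(pulse_width_ms / integration_ms)

loss = snr_dilution(1.0, 5.0)  # ~0.45: roughly half the matched-filter S/N
```

Under these assumptions, a 1 ms pulse observed with 5 ms integrations retains about 45% of the S/N a matched 1 ms integration would give, which quantifies the trade-off against the unsustainable data rate.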

Our goal is to observe for 150 hours, enough to detect 1–10 FRBs or to exclude the published event rate with 99\% confidence. At the time of this writing (January 2014), we have observed 78 hours and processed roughly half of those data. No events have been found above our 8\(\sigma\) detection threshold, equivalent to a flux density of 130 mJy. At this threshold, we expect fewer than one false positive from Gaussian noise over the entire survey.
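The false-positive expectation follows from the one-sided Gaussian tail probability multiplied by the number of independent trials (pixels \(\times\) integrations \(\times\) DM trials). The sketch below shows the calculation; the trial count used here is an illustrative placeholder, not the survey's actual number.

```python
import math

def gaussian_tail(nsigma):
    """One-sided Gaussian tail probability P(x > nsigma * sigma)."""
    return 0.5 * math.erfc(nsigma / math.sqrt(2.0))

p = gaussian_tail(8.0)   # ~6e-16 per independent trial

# Assumed trial count for illustration only: the true number is the
# product of image pixels, integrations, and DM trials in the survey.
n_trials = 1e15
expected_false_positives = p * n_trials
```

Because the 8\(\sigma\) tail probability is of order \(10^{-16}\), even \(\sim10^{15}\) independent trials yield an expectation below one false positive, consistent with the threshold choice.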

The transient search pipeline currently runs on the NRAO Array Operations Center (AOC) cluster and on the “Darwin” cluster at Los Alamos National Laboratory (LANL); data are transferred to LANL by shipping disks. We also have approved compute time and storage at the NERSC computing center. The pipeline parallelizes DM trials over the cores of a node, while different time segments (“scans” in VLA parlance) are distributed over the nodes of the cluster.
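Because each DM trial is an independent dedispersion and search of the same data, the two-level scheme maps naturally onto a process pool per node. This is a structural sketch only, with hypothetical function names; the real pipeline's imaging and thresholding are elided.

```python
from multiprocessing import Pool

def search_dm(args):
    """Hypothetical per-trial worker: dedisperse one scan at one DM,
    image each integration, and return candidates above threshold.
    Placeholder body; the real search is elided."""
    scan, dm = args
    return []

def search_scan(scan, dm_trials, ncores=16):
    """Search one scan, spreading independent DM trials over the
    cores of a node. Scans themselves are farmed out to different
    nodes of the cluster by the job scheduler."""
    with Pool(ncores) as pool:
        results = pool.map(search_dm, [(scan, dm) for dm in dm_trials])
    return [cand for trial in results for cand in trial]
```

The key property exploited here is that DM trials share input data but no intermediate state, so they parallelize with no communication beyond the final candidate merge.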

The processing time and memory footprint are dominated by the FFT stage. The size of the image grid is set by the VLA antenna configuration and ranges from 512 to 2048 pixels on a side. In the more compact of the configurations we used (called “CnB”), the pipeline can search one hour of data in 70 hours on a single node, equivalent to roughly 340 images per second per node. The majority of our data were observed in a more extended configuration (called “B”) and process several times more slowly.
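The quoted throughput can be related to the data and search parameters: one hour of data at 5 ms integrations is 720,000 snapshots, and each snapshot is imaged once per DM trial. The number of DM trials is not stated in the text, so the value below is an assumed placeholder chosen only to illustrate the arithmetic.

```python
# Back-of-the-envelope throughput check (illustrative numbers).
data_seconds = 3600.0        # one hour of data
t_int = 0.005                # 5 ms integration time
proc_seconds = 70 * 3600.0   # 70 node-hours to search that hour

integrations = data_seconds / t_int      # 720,000 snapshots
n_dm = 120                               # ASSUMED DM-trial count (not given in text)
images_per_sec = integrations * n_dm / proc_seconds
# With this assumed n_dm, the result is ~343 images/s/node,
# of the same order as the quoted ~340.
```

Since FFT cost grows with grid size, the same arithmetic shows why the larger "B"-configuration grids (up to 2048 on a side) process several times more slowly at fixed node count.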