In the context of testing Einstein’s General Theory of Relativity (GR) using continuous gravitational waves (GWs), we can define all possible signal models as subsets of the most generic conceivable signal (Isi 2015): \[s (t) = \frac{1}{2} \sum_p a_p A_p(t),\] where the \(A_p\) are the antenna patterns, the \(a_p\) are complex coefficients, and the sum runs over all possible polarizations: plus (\(+\)), cross (\(\times\)), vector x (x), vector y (y), breathing (b) and longitudinal (l). Note that the longitudinal mode is fully degenerate with the breathing mode and can be safely excluded. The GR hypothesis is obtained by setting: \[\label{eqn:a_plus} a_+ = h_0 (1+\cos^2 \iota)/2,\] \[a_\times = h_0 \cos \iota~ e^{-i\pi/2},\] \[\label{eqn:a_others} a_{\rm x} = a_{\rm y} = a_{\rm b} = a_{\rm l} = 0,\] where \(\iota\) is the inclination angle and \(h_0\) is an overall amplitude determined by the properties of the source.
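As a concrete illustration, the GR coefficients of eqs. (\ref{eqn:a_plus}–\ref{eqn:a_others}) can be computed with a few lines of Python; the function name `gr_amplitudes` and the polarization keys are our own illustrative choices, not part of any LALSuite interface:

```python
import cmath
import math

def gr_amplitudes(h0, iota):
    """Complex polarization coefficients a_p under the GR hypothesis.

    Only the tensor modes are nonzero: a_+ is real, a_x (cross) carries
    the fixed e^{-i pi/2} phase, and all vector/scalar modes vanish.
    """
    a_plus = h0 * (1.0 + math.cos(iota) ** 2) / 2.0
    a_cross = h0 * math.cos(iota) * cmath.exp(-1j * math.pi / 2)
    # Vector and scalar coefficients are identically zero in GR.
    return {"+": a_plus, "x": a_cross, "vx": 0.0, "vy": 0.0, "b": 0.0, "l": 0.0}
```

For a face-on source (\(\iota = 0\)), this gives \(a_+ = h_0\) and \(a_\times = -i h_0\), i.e. a circularly polarized tensor signal.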

Note that the polarizations can be categorized in terms of their associated graviton spin: tensor (\(+\), \(\times\)), vector (x, y) and scalar (b, l). These groups are not separable: a signal model that includes one element must also include the other (e.g. it is not possible to have a model that allows plus (\(+\)) but not cross (\(\times\))). The reason is that the distinction between modes of the same spin is contingent on the relative orientation of source and detector (i.e. the difference is not intrinsic to the signal).

In order for our notation to agree with that used in `lalapps_pulsar_parameter_estimation_nested`, for each polarization \(p\) we parametrize \(a_p\) by a real amplitude \(h_p\) and a relative phase \(\phi_p\), such that \(a_p=h_p e^{i\phi_p}\). Furthermore, we define an overall angle \(\phi_s\) for each spin \(s\) and an internal offset \(\psi_s\) between the components of a given spin, so that: \[\phi_+ = \phi_{\rm t}\] \[\phi_\times = \phi_{\rm t} + \psi_{\rm t}\] \[\phi_{\rm x} = \phi_{\rm v}\] \[\phi_{\rm y} = \phi_{\rm v} + \psi_{\rm v}\] \[\phi_{\rm b} = \phi_{\rm s}\] \[\phi_{\rm l} = \phi_{\rm s} + \psi_{\rm s}\]
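The mapping above is mechanical, so a short sketch may help fix the convention; the function name and dictionary keys here are illustrative, not taken from any existing code:

```python
import cmath
import math

def polarization_coefficients(h, phi, psi):
    """Build a_p = h_p * exp(i phi_p) from per-spin phases and offsets.

    h   : dict of real amplitudes keyed by polarization ('+', 'x', 'vx', 'vy', 'b', 'l')
    phi : dict of overall phases keyed by spin ('t', 'v', 's')
    psi : dict of internal offsets keyed by spin ('t', 'v', 's')
    """
    # (polarization, spin, whether the internal offset psi_s applies)
    layout = [("+", "t", False), ("x", "t", True),
              ("vx", "v", False), ("vy", "v", True),
              ("b", "s", False), ("l", "s", True)]
    return {p: h[p] * cmath.exp(1j * (phi[s] + (psi[s] if offset else 0.0)))
            for p, s, offset in layout}
```

Note that, with this parametrization, the GR values of eqs. (\ref{eqn:a_plus}–\ref{eqn:a_others}) correspond to \(\psi_{\rm t} = -\pi/2\).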

Given some data, the three relevant hypotheses we must consider are: *Gaussian noise* (\({\cal H}_{\rm n}\)), *GR signal with Gaussian noise* (\({\cal H}_{\rm GR}\)) and *non–GR signal with Gaussian noise* (\({\cal H}_{\rm nGR}\)). The first two are computed directly in LALSuite; the last one is not so straightforward: the *non–GR signal* hypothesis is a composite hypothesis, formed by the logical “or” of the hypotheses corresponding to the presence of *all* the possible signals that are not allowed by GR (Li 2012).

Broadly speaking, there are two ways in which GR can be violated: there may be some vector or scalar component *in addition to* the GR signal, or the signal may be composed of any other combination of the six polarizations that is *different* from eqs. (\ref{eqn:a_plus}-\ref{eqn:a_others}). Let us denote the former by \({\cal H}_{\rm GR+v}\), \({\cal H}_{\rm GR+s}\) and \({\cal H}_{\rm GR+sv}\), while the latter can be indexed by their component spins, e.g. \({\cal H}_{\rm sv}\) representing the *scalar–vector* model. In this notation, \({\cal H}_{\rm nGR}\) can be obtained from the ten non–GR sub–hypotheses by: \[\label{eq:hypotheses}
{\cal H}_{\rm nGR} = {\cal H}_{\rm GR+s} \lor {\cal H}_{\rm GR+v} \lor {\cal H}_{\rm GR+sv} \lor {\cal H}_{\rm s} \lor {\cal H}_{\rm t} \lor {\cal H}_{\rm v} \lor {\cal H}_{\rm st} \lor {\cal H}_{\rm sv} \lor {\cal H}_{\rm tv} \lor {\cal H}_{\rm stv}\] Note that \({\cal H}_{\rm t}\) is logically different from \({\cal H}_{\rm GR}\), since the latter is more restrictive. **[MI: even then, we might decide, based on physics/geometry, that the only possible relation between tensor modes is as given in GR (i.e. phase difference of \(\pi/2\)); in that case, we could discard \({\cal H}_{\rm t}\).]**

Our goal is to compare the probability of \({{\cal H}_{\rm nGR}}\) vs. \({{\cal H}_{\rm GR}}\). For a set of data \(d\) and general information \(I\) (suppressed from our expressions), this is formally given by the odds ratio: \[O^{\rm nGR}_{\rm GR} \equiv \frac{P({{\cal H}_{\rm nGR}}|d)}{P({{\cal H}_{\rm GR}}|d)}=\frac{P({{\cal H}_{\rm nGR}})}{P({{\cal H}_{\rm GR}})} {B^{\rm nGR}_{\rm GR}},\] where we have used Bayes theorem to write the odds ratio as the product of the *Bayes factor* \({B^{\rm nGR}_{\rm GR}}=P(d|{{\cal H}_{\rm nGR}})/P(d|{{\cal H}_{\rm GR}})\) and the *prior odds* \(P({{\cal H}_{\rm nGR}})/P({{\cal H}_{\rm GR}})\).

The prior odds encode our prior beliefs in each hypothesis irrespective of the data, and the priors must sum to unity. In our case, since \({{\cal H}_{\rm nGR}}=\neg{{\cal H}_{\rm GR}}\) by construction, let \[P({{\cal H}_{\rm nGR}})=\alpha,\] \[P({{\cal H}_{\rm GR}})=1-\alpha,\] for some \(\alpha\) such that \(0<\alpha<1\). Now, \({{\cal H}_{\rm nGR}}\) is composed of several *independent* child hypotheses, to each of which a relative prior must be assigned: \[\alpha = P({{\cal H}_{\rm nGR}}) = \sum_{i=1}^N P({\cal H}_i),\] where the sum is over the child hypotheses enumerated in eq. (\ref{eq:hypotheses}). However, in general there is no reason to prefer *a priori* any one child over the others and, in that case, all the priors are equal. Calling that common value \(\beta\), then: \[\alpha = \sum_{i=1}^N \beta = N \beta \implies \beta = \alpha/N,\] where, in the case at hand, the total number of sub–hypotheses is \(N=10\).
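The bookkeeping of this prior split is trivial but worth pinning down; a minimal sketch, with the function name and defaults chosen here for illustration only:

```python
def child_priors(alpha=0.5, n_children=10):
    """Split the total non-GR prior alpha evenly among N child hypotheses.

    Returns the per-child prior beta = alpha / N and the overall
    prior odds P(H_nGR) / P(H_GR) = alpha / (1 - alpha).
    """
    beta = alpha / n_children
    prior_odds = alpha / (1.0 - alpha)
    return beta, prior_odds
```

With the agnostic choice \(\alpha = 0.5\), the prior odds are unity and each child hypothesis receives \(\beta = 0.05\).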

The other component needed is the Bayes factor. Although the nested sampling code always computes Bayes factors for the presence of a given signal vs. noise, we can use that to construct the desired quantity: \[\label{eq:BnGR_GR} {B^{\rm nGR}_{\rm GR}} = \frac{P(d|{{\cal H}_{\rm nGR}})}{P(d|{{\cal H}_{\rm GR}})} = \frac{P(d|{{\cal H}_{\rm nGR}})}{P(d|{{\cal H}_{\rm n}})} \frac{P(d|{{\cal H}_{\rm n}})}{P(d|{{\cal H}_{\rm GR}})}= \frac{{B^{\rm nGR}_{\rm n}}}{{B^{\rm GR}_{\rm n}}}.\]
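Since nested sampling codes report evidences in logarithmic form, the ratio in eq. (\ref{eq:BnGR_GR}) is best evaluated as a difference of log evidences; the helper names below are illustrative, not part of LALSuite:

```python
import math

def log_bayes(log_evidence_model, log_evidence_noise):
    """ln B^model_n: log Bayes factor of a signal model vs. Gaussian noise."""
    return log_evidence_model - log_evidence_noise

def bayes_ngr_gr(log_bayes_ngr_n, log_bayes_gr_n):
    """B^nGR_GR as the ratio of the two signal-vs-noise Bayes factors,
    computed in log space to avoid floating-point overflow."""
    return math.exp(log_bayes_ngr_n - log_bayes_gr_n)
```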

Putting it all together the probability of \({{\cal H}_{\rm nGR}}\) vs \({{\cal H}_{\rm GR}}\) given some data is: \[\begin{aligned}
O^{\rm nGR}_{\rm GR} &\equiv \frac{P({{\cal H}_{\rm nGR}}|d)}{P({{\cal H}_{\rm GR}}|d)} = \sum_{i=1}^{N} \frac{P({\cal H}_i|d)}{P({{\cal H}_{\rm GR}}|d)} = \sum_{i=1}^{N} \frac{P({\cal H}_i)}{P({{\cal H}_{\rm GR}})} B^{i}_{\textrm{GR}} \nonumber \\
&= \frac{\alpha}{1-\alpha}\frac{1}{N} \sum_{i=1}^{N} B^{i}_{\textrm{GR}} = \frac{\alpha}{1-\alpha}\frac{1}{N} \sum_{i=1}^{N} \frac{B^{i}_{\textrm{n}}}{{B^{\rm GR}_{\rm n}}}.\end{aligned}\] Setting \(\alpha=0.5\) and \(N=10\), this becomes explicitly: \[O^{\rm nGR}_{\rm GR} = \frac{1}{10 {B^{\rm GR}_{\rm n}}}\left(B^{\rm GR+s}_{\rm n} + B^{\rm GR+v}_{\rm n} + B^{\rm GR+sv}_{\rm n} + B^{\rm s}_{\rm n} + B^{\rm t}_{\rm n} + B^{\rm v}_{\rm n} + B^{\rm sv}_{\rm n}+ B^{\rm st}_{\rm n} + B^{\rm tv}_{\rm n}+ B^{\rm stv}_{\rm n}\right).\] This requires running `lalapps_pulsar_parameter_estimation_nested` 11 times per data set (pulsar and science run).
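The final combination can be carried out entirely in log space with a log-sum-exp over the sub-hypothesis Bayes factors; the following sketch (with an illustrative function name) implements the last equation above:

```python
import math

def log_odds_ngr_gr(log_bayes_sub, log_bayes_gr, alpha=0.5):
    """ln O^nGR_GR from the N sub-hypothesis log Bayes factors (vs. noise)
    and the GR log Bayes factor.

    log_bayes_sub : list of ln B^i_n for the N non-GR sub-hypotheses
    log_bayes_gr  : ln B^GR_n
    alpha         : total prior probability assigned to H_nGR
    """
    n = len(log_bayes_sub)
    # Log-sum-exp of the sub-hypothesis Bayes factors, shifted by the
    # maximum for numerical stability.
    m = max(log_bayes_sub)
    log_sum = m + math.log(sum(math.exp(b - m) for b in log_bayes_sub))
    return math.log(alpha / (1.0 - alpha)) - math.log(n) + log_sum - log_bayes_gr
```

As a sanity check, if all ten sub-hypotheses have the same Bayes factor as GR and \(\alpha=0.5\), the log odds vanish: the data express no preference either way.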
