Prony analysis [Technical report]

Consider the following system of signals:

\[S = \{ (a_j, x_j) \}, \quad j = 1, \ldots , N\]

where \(a_j\) are signal amplitudes and \(x_j\) are signal positions. Both \(\{a_j\}\) and \(\{x_j\}\) may be represented by real or complex numbers.

Define the Prony map as a map \(\{ (a_j, x_j) \} \mapsto \{m_k\}\) of the following form: \[\label{eq:prony} m_k = \sum_{j=1}^N a_j x_j^k.\]
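The direct Prony map is straightforward to evaluate. Below is a minimal sketch in Python (the report's plots use R); the function name and the sample amplitudes and positions are illustrative assumptions.

```python
# Direct Prony map: compute the moments m_k = sum_j a_j * x_j^k.
def prony_moments(a, x, num_moments):
    """Return [m_0, ..., m_{num_moments - 1}] for the system {(a_j, x_j)}."""
    return [sum(aj * xj ** k for aj, xj in zip(a, x))
            for k in range(num_moments)]

# Illustrative values (assumptions, not from the report):
a = [2.0, 1.0]   # amplitudes a_j
x = [0.5, -0.3]  # positions x_j
m = prony_moments(a, x, 4)
# m[0] = a_1 + a_2, m[1] = a_1 x_1 + a_2 x_2, and so on.
```

The same function works unchanged for complex amplitudes and positions.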

The numbers \(\{m_k\}\) are called the moments of the system, and equation \eqref{eq:prony} is called the Prony system of equations.

There is an important special case of the Prony system:

\[S_t = \{ (a_j, \mu_j) \}; \quad m_k = \sum_{j=1}^N a_j e^{-2\pi i \mu_j k \delta}\]

where \(\delta \in \mathbb{R}_+\) is a parameter. We call this the trigonometric Prony map. If \(|x_j| = 1\), the general Prony map can be reduced to the trigonometric case by setting \(x_j = e^{-2\pi i \mu_j \delta}\).
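A sketch of the trigonometric case; the amplitudes, frequencies \(\mu_j\), and parameter \(\delta\) below are illustrative assumptions.

```python
import cmath

# Trigonometric Prony map: m_k = sum_j a_j * exp(-2*pi*i * mu_j * k * delta).
def trig_moments(a, mu, delta, num_moments):
    return [sum(aj * cmath.exp(-2j * cmath.pi * muj * k * delta)
                for aj, muj in zip(a, mu))
            for k in range(num_moments)]

# Illustrative values (assumptions, not from the report):
a = [1.0, 0.5]   # amplitudes
mu = [0.2, 1.3]  # frequencies mu_j
delta = 0.1      # the parameter delta > 0
m = trig_moments(a, mu, delta, 8)
# All nodes lie on the unit circle, so |m_k| <= |a_1| + |a_2| for every k.
```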

In practical applications, we usually know only the moments of a source system, and we want to reconstruct the original amplitudes and positions of the signals. This means that we have to build the inverse Prony map. Unfortunately, measurements of moments frequently include some noise, and a key feature of the inverse Prony map is that even small noise can produce a significant reconstruction error. Therefore, it is very important to understand the geometry of the Prony map and the inverse Prony map.
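This sensitivity is easy to observe numerically. The sketch below uses the classical reconstruction steps derived later in this report for \(N = 2\): two close nodes with unit amplitudes, and a single moment perturbed by \(10^{-6}\). All concrete values are illustrative assumptions, and the code is a demonstration, not a numerically robust solver.

```python
import cmath

def recover_nodes(m):
    """Recover the nodes x_1, x_2 from the moments m_0..m_3 (N = 2):
    solve the 2x2 linear system for u_1 = x_1 + x_2 and u_2 = x_1 x_2,
    then take the roots of x^2 - u_1 x + u_2 = 0 (quadratic formula)."""
    det = m[0] * m[2] - m[1] * m[1]          # Cramer's rule on a 2x2 system
    u1 = (m[0] * m[3] - m[1] * m[2]) / det
    u2 = (m[1] * m[3] - m[2] * m[2]) / det
    d = cmath.sqrt(u1 * u1 - 4 * u2)
    return sorted([(u1 - d) / 2, (u1 + d) / 2], key=lambda z: z.real)

# Two close nodes with unit amplitudes (illustrative values):
x_true = [1.0, 1.01]
m = [sum(xj ** k for xj in x_true) for k in range(4)]

eps = 1e-6
m_noisy = m[:3] + [m[3] + eps]               # perturb a single moment
err_clean = max(abs(xr - xt) for xr, xt in zip(recover_nodes(m), x_true))
err_noisy = max(abs(xr - xt) for xr, xt in zip(recover_nodes(m_noisy), x_true))
# err_noisy exceeds eps by several orders of magnitude
```

The amplification comes from the near-singular moment matrix: for close nodes its determinant is tiny, so a small perturbation of the right-hand side moves \(u_1, u_2\), and hence the recovered nodes, dramatically.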

In this technical report, we consider some examples of the direct and inverse Prony maps and build the corresponding plots. The R source code for each plot is also provided.

See also: (Akinshin 2015), (Azaïs in press), (Batenkov 2014), (Batenkov 2014a), (Batenkov 2014b), (Batenkov 2014c), (Batenkov 2013), (Batenkov 2013a), (Candès 2014), (Candès 2013), (Demanet 2013), (Donoho 1992), (Donoho 1989), (Duval 2013), (Fernandez-Granda 2013), (Heckel 2014), (Levy 1981), (Liao 2014), (McCutchen 1967), (Minami 1985), (Moitra 2014), (Odendaal 1994), (Slepian 1978), (Yomdin 2010), (Yomdin 2005), (Demanet 2014), (Morgenshtern 2014).

Let’s consider the following system:

\[m_k = \sum_{j=1}^N a_j x_j^k, \quad k \in \{0, 1, \ldots, 2N-1\}.\]

Introduce the variables \(\{u_j\}\), the elementary symmetric polynomials of \(\{x_j\}\):

\[\begin{cases} u_1 = x_1 + x_2 + \ldots + x_N, \\ u_2 = x_1x_2 + x_1x_3 + \ldots + x_{N-1}x_N, \\ \ldots \\ u_N = x_1x_2\ldots x_N. \end{cases}\]

By Vieta’s theorem, each \(x_j\) is a root of \(x^N - u_1 x^{N-1} + u_2 x^{N-2} - \ldots + (-1)^N u_N = 0\); multiplying this identity at \(x = x_j\) by \(a_j x_j^{l-N}\) and summing over \(j\), we obtain:

\[\label{eq3} m_{l} = m_{l-1}u_1 - m_{l-2}u_2 + \ldots + (-1)^{N-1}m_{l-N} u_N.\]
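The recurrence above can be checked numerically; in the sketch below the amplitudes and nodes are illustrative assumptions, and the moments are computed by the direct Prony map.

```python
# Numerical check of the recurrence for N = 2:
#   m_l = m_{l-1} u_1 - m_{l-2} u_2.
# Amplitudes and nodes are illustrative values (assumptions).
a = [2.0, 1.0]
x = [0.5, -0.3]

u1 = x[0] + x[1]   # elementary symmetric polynomials of the nodes
u2 = x[0] * x[1]

m = [sum(aj * xj ** k for aj, xj in zip(a, x)) for k in range(6)]
for l in range(2, 6):
    assert abs(m[l] - (m[l - 1] * u1 - m[l - 2] * u2)) < 1e-12
```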

Thus, \(\{u_j\}\) is a solution of the following system of equations:

\[\begin{pmatrix} m_{N-1} & -m_{N-2} & \ldots & (-1)^{N-1} m_0 \\ m_{N} & -m_{N-1} & \ldots & (-1)^{N-1} m_1 \\ \vdots & \vdots & \ddots & \vdots \\ m_{2N-2} & -m_{2N-3} & \ldots & (-1)^{N-1} m_{N-1} \end{pmatrix} \times \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_N \end{pmatrix} = \begin{pmatrix} m_N \\ m_{N+1} \\ \vdots \\ m_{2N-1} \\ \end{pmatrix}.\]

Then we can find \(\{x_j\}\) as the roots of the polynomial equation:

\[x^N - u_1 x^{N-1} + \ldots + (-1)^N u_N = 0.\]

Finally, we can find \(\{a_j\}\) as the solution of the following Vandermonde system:

\[\begin{pmatrix} 1 & 1 & \ldots & 1 \\ x_1 & x_2 & \ldots & x_N \\ x_1^2 & x_2^2 & \ldots & x_N^2 \\ \vdots & \vdots & \ddots & \vdots \\ x_1^{N-1} & x_2^{N-1} & \ldots & x_N^{N-1} \\ \end{pmatrix} \times \begin{pmatrix} a_1 \\ a_2 \\ a_3\\ \vdots \\ a_N \end{pmatrix} = \begin{pmatrix} m_0 \\ m_1 \\ m_2 \\ \vdots \\ m_{N-1} \end{pmatrix}.\]
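Putting the three steps together for \(N = 2\), where both linear systems can be solved by Cramer's rule and the polynomial by the quadratic formula, gives a minimal end-to-end sketch. The function name and sample values are illustrative assumptions; this is a demonstration of the procedure, not a numerically robust implementation.

```python
import cmath

def prony_inverse_2(m):
    """Reconstruct {(a_1, x_1), (a_2, x_2)} from the moments m_0..m_3.
    Step 1: solve the 2x2 system for u_1, u_2 by Cramer's rule.
    Step 2: find x_1, x_2 as the roots of x^2 - u_1 x + u_2 = 0.
    Step 3: solve the 2x2 Vandermonde system for a_1, a_2."""
    det = m[0] * m[2] - m[1] * m[1]
    u1 = (m[0] * m[3] - m[1] * m[2]) / det
    u2 = (m[1] * m[3] - m[2] * m[2]) / det
    d = cmath.sqrt(u1 * u1 - 4 * u2)
    x1, x2 = (u1 + d) / 2, (u1 - d) / 2
    # Vandermonde system: a_1 + a_2 = m_0, a_1 x_1 + a_2 x_2 = m_1.
    a1 = (m[1] - x2 * m[0]) / (x1 - x2)
    a2 = m[0] - a1
    return [(a1, x1), (a2, x2)]

# Round trip: forward map on a known system, then reconstruction.
a = [2.0, 1.0]
x = [0.5, -0.3]
m = [sum(aj * xj ** k for aj, xj in zip(a, x)) for k in range(4)]
rec = prony_inverse_2(m)   # recovers (2, 0.5) and (1, -0.3) up to rounding
```

For general \(N\), the same steps require a linear solver for the \(N \times N\) systems and a polynomial root finder (e.g. a companion-matrix method) in place of the closed-form formulas.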