\subsection{Waveform optimization}
Our data are composed of a set of $N$ time traces $D=\{d_t^i\}$, with $i \in [1,\dots,N]$ and $t \in [1,\dots,T^i]$.
Given the data, the hidden states $X=\{x_t^i\}=\{(\phi_t^i,\lambda_t^i)\}$ and the parameters $\Theta$, we want to maximize the likelihood of the data $P(D|\Theta)$ over $\Theta$.
...
In the context of our HMM, the joint distribution $P(D,X|\Theta)$ is given by $\prod_{i=1}^{N} \prod_{t=1}^{T^i} P(d_t^i,x_t^i|\Theta) = \prod_{i=1}^{N} \prod_{t=1}^{T^i} P(d_t^i|x_t^i,\Theta)\,P(x_t^i|\Theta)$. Here $P(d_t^i|x_t^i,\Theta)$ is simply the emission probability and is proportional to $\exp[-(d_t^i - m(x_t^i))^2]$.
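Taking the logarithm of this factorization, and using the emission form above, the complete-data log-likelihood is, up to an additive normalization constant,
\[
\log P(D,X|\Theta) = \sum_{i=1}^{N} \sum_{t=1}^{T^i} \left[ -(d_t^i - m(x_t^i))^2 + \log P(x_t^i|\Theta) \right] + \mathrm{const}.
\]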
The posterior distribution of the hidden states, $P(X|D,\Theta)$, factorizes as $\prod_{i=1}^{N} \prod_{t=1}^{T^i} P(x_t^i|d_t^i,\Theta)$.
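Assuming Eq.~\ref{eqn:Qem} takes the standard EM form of the expected complete-data log-likelihood under the previous posterior, the objective at iteration $k$ reads
\[
Q(\Theta_k|\Theta_{k-1}) = \int_X \log P(D,X|\Theta_k)\, P(X|D,\Theta_{k-1})\, dX .
\]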
Substituting into Eq.~\ref{eqn:Qem} we find:
\[
\int_X \sum_{i=1}^{N} \sum_{t=1}^{T^i} \left[ -(d_t^i - m(x_t^i))^2 + \log P(x_t^i|\Theta_{k}) \right] P(X|D,\Theta_{k-1})\, dX .
\]
Next we want to take the derivative of this expression with respect to the components of $\Theta_{k}$ that define the waveform and equate it to zero to find its maximum. As $\log P(x_t^i|\Theta_{k})$ does not depend on the waveform, it can be neglected, as can any constant multiplicative or additive factors.
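To make the resulting condition concrete, consider the case where the waveform is parametrized pointwise, so that $m(x)$ is a free parameter for each value of the hidden state $x$ (this parametrization, and the shorthand $\gamma_t^i(x) = P(x_t^i = x \mid D, \Theta_{k-1})$ for the posterior marginals, are assumptions of this sketch). Setting the derivative of the remaining term $-\sum_{i,t} \int (d_t^i - m(x))^2\, \gamma_t^i(x)\, dx$ with respect to $m(x)$ to zero gives
\[
m(x) = \frac{\sum_{i=1}^{N} \sum_{t=1}^{T^i} \gamma_t^i(x)\, d_t^i}{\sum_{i=1}^{N} \sum_{t=1}^{T^i} \gamma_t^i(x)},
\]
i.e., each point of the waveform is updated to the posterior-weighted average of the data associated with that hidden state.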