Olga edited Methods Description.tex  over 9 years ago

Commit id: 57732ec1fc84c55d91bc971e51e1f4cf6ec875b2

$\textbf{x}=(x_1,...,x_n)$ is the vector of observations, where $(x_i|z_i=1) \sim N(\mu_{1}, \sigma^2_{1})$ and $(x_i|z_i=2) \sim N(\mu_{2}, \sigma^2_{2})$. The Expectation-Maximization (EM) method consists of an expectation step (E-step) and a maximization step (M-step).
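To fix notation (this spelled-out form is not stated explicitly above and is an assumption consistent with the E-step denominator below), write $P(Z_i=1)=p$ and $P(Z_i=2)=1-p$, so that $\mathbf{\theta}=(p,\mu_{1},\mu_{2},\sigma^2_{1},\sigma^2_{2})$ and the observed-data density of each $x_i$ is the two-component mixture

$f(x_i|\mathbf{\theta}) = p f(x_i|\mu_{1}, \sigma^2_{1}) + (1-p) f(x_i|\mu_{2}, \sigma^2_{2})$,

with observed-data log-likelihood $\log L(\mathbf{\theta};\textbf{x}) = \sum\limits_{i=1}^n \log f(x_i|\mathbf{\theta})$, the quantity tracked by the stopping rule below.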

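For concreteness, one full EM iteration for this two-component mixture can be sketched as follows. This is a minimal illustrative sketch, not the implementation used in this work: the responsibilities and the mixing-weight update follow the E-step and M-step formulas derived below, the mean and variance updates are assumed to be the standard responsibility-weighted ones, and the function and variable names are hypothetical.

\begin{verbatim}
# Minimal illustrative EM sketch for the two-component Gaussian mixture
# described in this section (hypothetical helper, not the code used here).
import numpy as np
from scipy.stats import norm

def em_two_gaussians(x, p=0.5, mu1=None, mu2=None, var1=1.0, var2=1.0,
                     tol=1e-3, max_iter=1000):
    x = np.asarray(x, dtype=float)
    # crude starting values for the means if none are given
    mu1 = np.percentile(x, 25) if mu1 is None else mu1
    mu2 = np.percentile(x, 75) if mu2 is None else mu2
    loglik_old = -np.inf
    for _ in range(max_iter):
        # E-step: responsibilities T_{1,i} and T_{2,i}
        f1 = norm.pdf(x, mu1, np.sqrt(var1))
        f2 = norm.pdf(x, mu2, np.sqrt(var2))
        mix = p * f1 + (1.0 - p) * f2          # observed-data density
        t1 = p * f1 / mix
        t2 = 1.0 - t1
        # M-step: mixing weight, then responsibility-weighted means/variances
        p = t1.mean()
        mu1, mu2 = np.sum(t1 * x) / t1.sum(), np.sum(t2 * x) / t2.sum()
        var1 = np.sum(t1 * (x - mu1) ** 2) / t1.sum()
        var2 = np.sum(t2 * (x - mu2) ** 2) / t2.sum()
        # stop when |log L(t+1) - log L(t)| < 1e-3
        loglik = np.sum(np.log(mix))
        if abs(loglik - loglik_old) < tol:
            break
        loglik_old = loglik
    return p, (mu1, mu2), (var1, var2), loglik
\end{verbatim}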
\textit{E-step}

Complete-data likelihood: $L(\mathbf{\theta};\textbf{x};\textbf{z})=P(\textbf{x},\textbf{z}|\mathbf{\theta})=\prod\limits_{i=1}^n P(Z_i=z_i)f(x_i|\mu_{z_i}, \sigma^2_{z_i})$

$Q(\mathbf{\theta}|\mathbf{\theta}^{(t)})=E_{\textbf{z}|\textbf{x},\mathbf{\theta}^{(t)}}[\log L(\mathbf{\theta};\textbf{x};\textbf{z})]$

The membership probabilities (responsibilities) under the current estimate $\mathbf{\theta}^{(t)}$ are

$T^{(t)}_{j,i}=P(Z_i=j|X_i=x_i,\mathbf{\theta}^{(t)})=\frac{P(z_{j})f(x_i|\mu^{(t)}_{j}, \sigma^{2(t)}_{j})}{p^{(t)} f(x_i|\mu^{(t)}_{1}, \sigma^{2(t)}_{1})+(1-p^{(t)})f(x_i|\mu^{(t)}_{2}, \sigma^{2(t)}_{2})}$

so that

$Q(\mathbf{\theta}|\mathbf{\theta}^{(t)})=E_{\textbf{z}|\textbf{x},\mathbf{\theta}^{(t)}}[\log L(\mathbf{\theta};\textbf{x};\textbf{z})] = E\left[\log \prod\limits_{i=1}^n L(\mathbf{\theta};x_{i};z_{i})\right] = \sum\limits_{i=1}^n E[\log L(\mathbf{\theta};x_{i};z_{i})] = \sum\limits_{i=1}^n \sum\limits_{j=1}^2 T^{(t)}_{j,i}\left[\log P(z_{j}) -\frac{1}{2}\log(2\pi) - \frac{1}{2}\log\sigma^{2}_{j} - \frac{(x_{i}-\mu_{j})^2}{2\sigma^{2}_{j}}\right]$

\textit{M-step}

$\mathbf{\theta}^{(t+1)} = \arg \max\limits_{\mathbf{\theta}} Q(\mathbf{\theta}|\mathbf{\theta}^{(t)})$

$\hat{p}^{(t+1)} = \frac{1}{n} \sum\limits_{i=1}^n T^{(t)}_{1,i}$, \quad $(1-\hat{p})^{(t+1)} = \frac{1}{n} \sum\limits_{i=1}^n T^{(t)}_{2,i}$

The updates for $\mu_{j}$ and $\sigma^{2}_{j}$ follow from the same maximization of $Q(\mathbf{\theta}|\mathbf{\theta}^{(t)})$ and are the $T^{(t)}_{j,i}$-weighted sample means and variances.

Here $t$ indexes the iterations; the two steps are repeated until $|\log L^{(t+1)} -\log L^{(t)}| < 10^{-3}$.

\subsection{Excess-mass method (EMM)} \label{sec:methods-emm}