adam greenberg edited method.tex
about 10 years ago
Commit id: 2800b67591af6f64a8ce251e2cbf51720d22f392
\subsection{Steepest Descent Routine}
A classical Gauss-Newton routine (GNR) minimizes the weighted residuals between a model and data with Gaussian noise by determining the direction in parameter space in which the $\chi^2$ is decreasing fastest. Specifically, suppose one has a set of $m$ observables, $\vec{z}$ with weights $W$, and a model function $\vec{m}(\vec{x})$, where \vec{x} is an $n$-dimensional parameter vector. Assuming independent data points with Gaussian-distributed errors, the probability of the model matching the data is given by \[p(\vec{m}(\vec{x}) | \vec{z}) \propto p(\vec{z} | \vec{m}(\vec{x})) \propto \exp(
\frac{-1}{2}\vec{R}^\intercal -\frac{1}{2}\vec{R}^\intercal W \vec{R})\] where $\vec{R} = \vec{z} - \vec{m}(\vec{x})$ . Therefore maximizing the model probability is the same as minimizing the value \[\chi^2(\vec{x}) = \vec{R}^\intercal W \vec{R}\]
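The steepest-descent direction mentioned above can be written explicitly. As a brief sketch (the Jacobian $J$, with entries $J_{ij} = \partial m_i / \partial x_j$, is introduced here for illustration and may be notated differently elsewhere), differentiating $\chi^2$ with respect to the parameters and using the symmetry of $W$ gives \[\nabla_{\vec{x}}\,\chi^2(\vec{x}) = -2\, J^\intercal W \vec{R},\] so $\chi^2$ decreases fastest along the direction $J^\intercal W \vec{R}$.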
Perturbing $\vec{x}$ by some amount, $\delta \vec{x}$