\subsection{Steepest Descent Routine}

A classical steepest descent routine (SDR) minimizes the weighted residuals between a model and data with Gaussian noise by stepping in the direction in parameter space along which $\chi^2$ decreases fastest. Specifically, suppose one has a set of $m$ observables $\vec{z}$ with weight matrix $W$, and a model function $\vec{m}(\vec{x})$, where $\vec{x}$ is an $n$-dimensional parameter vector. Assuming independent data points with Gaussian-distributed errors, the probability of the model matching the data is given by \[ p(\vec{m}(\vec{x}) \,|\, \vec{z}) \propto p(\vec{z} \,|\, \vec{m}(\vec{x})) \propto \exp\left( -\frac{1}{2}\vec{R}^T W \vec{R} \right), \] where $\vec{R} = \vec{z} - \vec{m}(\vec{x})$.
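As an illustrative sketch (not the authors' implementation), the procedure can be coded as follows: evaluate $\chi^2 = \vec{R}^T W \vec{R}$, estimate its gradient with respect to $\vec{x}$, and repeatedly step opposite the gradient until $\chi^2$ stops decreasing. The model, data, step size, and tolerances below are hypothetical choices for demonstration only.

```python
import numpy as np

def chi2(x, z, W, model):
    """Weighted sum of squared residuals: R^T W R, with R = z - m(x)."""
    R = z - model(x)
    return R @ W @ R

def chi2_grad(x, z, W, model, eps=1e-6):
    """Gradient of chi^2 via central finite differences (for illustration;
    an analytic Jacobian would normally be preferred)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps
        g[i] = (chi2(x + dx, z, W, model) - chi2(x - dx, z, W, model)) / (2 * eps)
    return g

def steepest_descent(x0, z, W, model, step=1e-2, tol=1e-10, max_iter=10000):
    """Step along the negative gradient until chi^2 stops decreasing."""
    x = x0.astype(float)
    for _ in range(max_iter):
        x_new = x - step * chi2_grad(x, z, W, model)
        if abs(chi2(x, z, W, model) - chi2(x_new, z, W, model)) < tol:
            return x_new
        x = x_new
    return x

# Toy example: fit a line m(t) = x0 + x1*t to noiseless synthetic data.
t = np.linspace(0.0, 1.0, 5)
model = lambda x: x[0] + x[1] * t
z = model(np.array([1.0, 2.0]))   # "observed" data, true parameters (1, 2)
W = np.eye(len(z))                # unit weights
x_fit = steepest_descent(np.zeros(2), z, W, model)
```

Note the fixed step size: steepest descent converges only if the step is small relative to the curvature of $\chi^2$, which is one reason more sophisticated schemes (e.g. line searches or Levenberg--Marquardt) are often layered on top of this basic routine.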