\subsection{Steepest Descent Routine}

A classical steepest descent routine (SDR) minimizes the weighted residuals between a model and data with Gaussian noise by determining the direction in parameter space in which $\chi^2$ decreases fastest. Specifically, suppose one has a set of $m$ observables, $\vec{z}$, and a model function $M(\vec{x})$, where $\vec{x}$ is an $n$-dimensional parameter vector. Assuming independent data points with Gaussian-distributed errors, the probability of the model matching the data is given by $p(M(\vec{x})\,|\,\vec{z}) \propto p(\vec{z}\,|\,M(\vec{x})) \propto \exp(-\chi^2/2)$.
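
For concreteness, a brief sketch of the quantities involved follows, on the standard assumption that each observable $z_i$ has an independent Gaussian uncertainty $\sigma_i$ and that $M_i(\vec{x})$ denotes the corresponding model prediction (these symbol names are not fixed in the text above and are introduced here only for illustration):
% Sketch under the stated assumptions: z_i are the m observables,
% M_i(\vec{x}) the model predictions, and \sigma_i the per-point
% Gaussian uncertainties (notation assumed, not taken from the source).
\begin{align}
  \chi^2(\vec{x}) &= \sum_{i=1}^{m}
      \left( \frac{z_i - M_i(\vec{x})}{\sigma_i} \right)^2, \\
  \vec{x}_{k+1}   &= \vec{x}_k - \alpha_k \, \nabla_{\vec{x}}\, \chi^2(\vec{x}_k),
\end{align}
% where \alpha_k > 0 is a step size chosen at each iteration, e.g. by a line search.
where $\alpha_k > 0$ is a step size chosen at each iteration. Since the likelihood above is proportional to $\exp(-\chi^2/2)$, maximizing the probability of the model given the data is equivalent to minimizing $\chi^2$, which is the quantity the SDR iterates on.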