\begin{equation}
\dot{V}=\frac{dV}{dt}
\end{equation}

It is not necessary to write $\dot{V}$ instead of $\frac{dV}{dt}$, but the shorthand keeps the equations compact. Each parameter in these equations affects the modeled tumor growth: $\lambda$ is the ``intrinsic growth rate'' of the tumor, $a$ is a placeholder for an exponent, $K$ is the ``carrying capacity'' of the tumor, and $b=\frac{1}{K}$ is the inverse of the carrying capacity.

Now that we know what the equations define, we can distinguish between them. Equation \eqref{eq:ExponentialFit} is the Exponential Fit Equation.

\subsection{Fitting Procedure}

After selecting the data, we fit each model to the data sets we gathered. For each of the seven models tested, every parameter was allowed to vary freely so that the normalized Sum of Squared Residuals (SSR) between the data and the curve was minimized. Normalizing the SSR prevents the larger values at the end of a data set from being weighted more heavily than those at the beginning; without this normalization, many curves fit only the initial and final points, while the data between them were effectively ignored by the minimization. The normalization applied an inverse-square weighting to the SSR terms, so that contributions from the large, later values were reduced substantially and contributions from the small, early values only slightly. Only the SSR terms corresponding to each point were normalized; the data as shown on our graphs were not.

To select the best-fitting model we used the Akaike Information Criterion corrected for finite sample sizes (AICc). This measure describes how much information is gained by the extra parameters when fitting models, so a better AICc reflects a balance between a model's ability to describe the data and the minimization of its number of parameters.
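A minimal sketch of the weighted fitting step described above, using \texttt{scipy.optimize.curve\_fit} on the exponential model. The data values and parameter names here are hypothetical illustrations, not values from our data sets; passing \texttt{sigma=v} weights each squared residual by $1/V^2$, which is one way to realize the inverse-square normalization of the SSR.

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential_model(t, v0, lam):
    """Exponential growth: V(t) = V0 * exp(lambda * t)."""
    return v0 * np.exp(lam * t)

# Synthetic tumor-volume data (illustrative only): true V0 = 2.0, lambda = 0.3,
# with 5% multiplicative noise from a seeded generator.
t = np.linspace(0.0, 10.0, 15)
rng = np.random.default_rng(0)
v = 2.0 * np.exp(0.3 * t) * (1.0 + 0.05 * rng.normal(size=t.size))

# sigma=v makes curve_fit minimize sum(((v - model)/v)**2), i.e. each
# squared residual is weighted by 1/v**2, so large late-time volumes
# do not dominate the fit.
popt, _ = curve_fit(exponential_model, t, v, p0=(1.0, 0.1), sigma=v)
v0_hat, lam_hat = popt
```

With this weighting the recovered parameters stay close to the true values even though the raw volumes span more than an order of magnitude.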
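The AICc computation, including the small-sample failure mode discussed below, can be sketched as follows. This is a hypothetical helper (not code from our analysis) using one common least-squares form of the criterion, $\mathrm{AICc} = n\ln(\mathrm{SSR}/n) + 2k + \frac{2k(k+1)}{n-k-1}$:

```python
import math

def aicc(ssr, n, k):
    """Corrected Akaike Information Criterion for a least-squares fit.

    ssr: sum of squared residuals of the fitted model
    n:   number of data points
    k:   number of fitted parameters

    The correction term's denominator is n - k - 1, so when n == k + 1
    this raises ZeroDivisionError rather than returning a value --
    mirroring the guard against insubstantial data sets described here.
    """
    aic = n * math.log(ssr / n) + 2 * k
    return aic + (2 * k * (k + 1)) / (n - k - 1)
```

For models achieving the same SSR, the model with fewer parameters receives the lower (better) AICc.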
The AICc has an advantage over similar criteria because its main limiting factor is the number of parameters. It is especially useful compared with the Bayesian Information Criterion, which operates under the assumption that the number of data points is much greater than the number of parameters; many sources of tumor growth data do not include enough data points to satisfy this condition. % TODO: citation needed
The AICc itself requires a certain number of data points, but it fails explicitly: its correction term has $n - k - 1$ in the denominator (for $n$ data points and $k$ parameters), so if too few data points are used the denominator is zero and the calculation returns an error rather than a value. This is helpful because it prevents conclusions from being drawn from insubstantial data sets.

\subsection{Statistical Analysis}