Dylan Barth edited Section_title.tex almost 9 years ago

\subsection{Models}

\subsection{Data Selection}

\subsection{Fitting Procedure}

After selecting the data, we fit each of the seven models to the data sets we gathered. For each model, every parameter was allowed to vary freely while we minimized a normalized Sum of Squared Residuals (SSR) between the data and the fitted curve. Normalizing the SSR prevents the larger values near the end of a data set from being weighted more heavily than those at the beginning; without this normalization, many curves fit only the initial and final points, while the data between these values were effectively ignored by the minimization. The normalization applied an inverse-square weighting to the residuals, so that contributions from later, larger values were reduced substantially and contributions from early, smaller values were reduced only slightly. The data shown on our graphs were not normalized; only the SSR contributions corresponding to each point were.

To compare models we used the Akaike Information Criterion corrected for finite sample sizes (AICc). This measure quantifies how much information is gained by adding parameters when fitting a model, so a better (lower) AICc reflects a balance between a model's ability to predict the data and the minimization of the number of parameters. The AICc has an advantage over similar criteria such as the Bayesian Information Criterion, which operates under the assumption that the number of data points is much greater than the number of parameters; many sources of tumor growth data do not include enough data points to satisfy this assumption. (Citation?)

\subsection{Statistical Analysis}
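For reference, the standard least-squares form of the criterion used above is, with $n$ data points and $k$ free parameters,
\begin{equation}
\mathrm{AIC} = n \ln\!\left(\frac{\mathrm{SSR}}{n}\right) + 2k,
\qquad
\mathrm{AICc} = \mathrm{AIC} + \frac{2k(k+1)}{n - k - 1},
\end{equation}
where the correction term grows as $n$ approaches $k$, which is why the AICc remains usable for the short tumor-growth data sets described above while BIC-style criteria do not.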
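The weighted-SSR minimization described in the fitting procedure can be sketched as follows. This is a minimal illustration, not the analysis code: the exponential model is a hypothetical stand-in for one of the seven growth models (which are not listed here), the grid search stands in for a proper optimizer, and the weighting $w_i = 1/v_i^2$ is one plausible reading of the "inverse square law" normalization.

```python
import math

def weighted_ssr(params, times, volumes, model):
    """Sum of squared residuals with inverse-square weights.

    Each squared residual is divided by the square of the observed
    value, so large late-time volumes do not dominate the fit
    (assumed reading of the inverse-square normalization).
    """
    total = 0.0
    for t, v in zip(times, volumes):
        r = model(t, params) - v
        total += (r * r) / (v * v)  # weight w_i = 1 / v_i^2
    return total

def exponential(t, params):
    # Hypothetical stand-in for one of the seven growth models.
    v0, a = params
    return v0 * math.exp(a * t)

def grid_fit(times, volumes):
    """Minimal grid search over (v0, a); a real analysis would use
    a proper optimizer (e.g. Nelder-Mead) over all parameters."""
    best, best_params = float("inf"), None
    for v0 in [volumes[0] * s for s in (0.8, 0.9, 1.0, 1.1, 1.2)]:
        for a in [i * 0.01 for i in range(1, 101)]:
            ssr = weighted_ssr((v0, a), times, volumes, exponential)
            if ssr < best:
                best, best_params = ssr, (v0, a)
    return best_params, best
```

On synthetic data generated from the exponential model itself, the grid search recovers the generating growth rate, which is a quick sanity check that the weighted objective is being minimized as intended.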