% blasbenito edited materials_and_methods.tex over 8 years ago
% Commit id: 9a77b73bc38daff4d201a080d185e3574ecf2e32
\paragraph{Model selection and ensemble model forecasting}
We faced three issues when evaluating our models: 1) the lack of true absences made it impossible to evaluate commission error; 2) the low number of presences prevented the use of data splitting to evaluate omission error; 3) quasibinomial GLMs in R do not provide AIC values, making it difficult to rank the candidate models according to both fit and complexity. To address these issues while providing a robust evaluation framework that made the most of the presence records available, we used a leave-one-out approach to compute AUC values based on 1000 pseudoabsences not used to calibrate the model. This allowed
us to evaluate relative omission error \cite{Fielding199738,Phillips_2008}, and to compute explained deviance adjusted by the number of degrees of freedom of the model ($D^{2}$, or adjusted explained deviance) to assess goodness of fit \cite{Guisan_1999}.
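The adjusted explained deviance described above can be sketched as a small helper function. This is an illustrative Python translation of the formula given later in this section (the analyses themselves were run in R); the function name and arguments are hypothetical.

```python
def adjusted_d2(null_deviance, residual_deviance, n_cases, n_predictors):
    """Explained deviance adjusted for model complexity.

    Implements 1 - ((n - 1) / (n - p)) * (1 - D2), where
    D2 = (null deviance - residual deviance) / null deviance.
    """
    d2 = (null_deviance - residual_deviance) / null_deviance
    return 1.0 - ((n_cases - 1) / (n_cases - n_predictors)) * (1.0 - d2)
```

Because the correction factor $(n-1)/(n-p)$ exceeds one, the adjusted value is always below the raw $D^{2}$ and can be negative for models that explain little deviance relative to their complexity.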
The leave-one-out approach was applied to each model as follows: 1) a test presence record was selected; 2) all presences and background data within a 2.5-degree radius ($\sim$280 km) of the test presence were removed; 3) a GLM (quasibinomial family with weighted background) was calibrated without the test presence record; 4) the adjusted explained deviance of the GLM was computed as
\[
D^{2}_{adj} = 1 - \frac{n - 1}{n - p}\left(1 - \frac{D_{null} - D_{residual}}{D_{null}}\right),
\]
where $n$ is the number of cases, $p$ the number of predictors, $D_{null}$ the null deviance, and $D_{residual}$ the residual deviance; 5) AUC was computed as the proportion of pseudoabsences with a habitat suitability value lower than that of the test presence \cite{Fielding199738}.
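The steps above can be sketched as a leave-one-out loop. This is a minimal Python illustration on synthetic data: the coordinates, the environmental predictor, and the background weighting scheme are all hypothetical, and scikit-learn's logistic regression stands in for the quasibinomial GLM fitted in R. Spatial exclusion uses a plain Euclidean distance in degrees, as in step 2.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical synthetic data: presence and background points with one predictor.
n_pres, n_bg = 25, 1000
pres_xy = rng.uniform(-10, 10, size=(n_pres, 2))
bg_xy = rng.uniform(-10, 10, size=(n_bg, 2))
pres_env = rng.normal(1.0, 1.0, n_pres)  # presences sit in better habitat
bg_env = rng.normal(0.0, 1.0, n_bg)

auc_scores = []
for i in range(n_pres):
    test_xy, test_env = pres_xy[i], pres_env[i]
    # Steps 1-2: withhold the test presence and drop all presences and
    # background points within 2.5 degrees of it (the test point itself
    # is at distance 0, so it is excluded by the same mask).
    keep_pres = np.linalg.norm(pres_xy - test_xy, axis=1) > 2.5
    keep_bg = np.linalg.norm(bg_xy - test_xy, axis=1) > 2.5
    X = np.concatenate([pres_env[keep_pres], bg_env[keep_bg]]).reshape(-1, 1)
    y = np.concatenate([np.ones(keep_pres.sum()), np.zeros(keep_bg.sum())])
    # Step 3: weighted model; background weights downweight pseudoabsences so
    # their total weight matches the presences (an assumed weighting scheme).
    w = np.where(y == 1, 1.0, keep_pres.sum() / keep_bg.sum())
    model = LogisticRegression().fit(X, y, sample_weight=w)
    # Step 5: AUC as the proportion of pseudoabsences with suitability
    # lower than the suitability of the withheld test presence.
    p_test = model.predict_proba([[test_env]])[0, 1]
    p_bg = model.predict_proba(bg_env.reshape(-1, 1))[:, 1]
    auc_scores.append(float(np.mean(p_bg < p_test)))

mean_auc = float(np.mean(auc_scores))
```

Averaging the per-presence scores gives an overall leave-one-out AUC for the model; step 4 (adjusted explained deviance) would be computed from the fitted model's null and residual deviance inside the same loop.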