In the previous chapters on statistical methods used to develop species distribution models, it was noted that an important aspect of model building is model (variable) selection based on measures of model fit, such as D2 (explained deviance), or on information-theoretic measures, such as the Akaike Information Criterion (AIC; see Chapter 6). This chapter addresses the important step of model evaluation. In species distribution modeling (SDM), the evaluation of habitat suitability models and the resulting predictive maps has focused on quantifying prediction accuracy as a measure of model performance or validity (Table 9.1; criterion 7), as described in Section 9.3. Predictive performance, however, is only one aspect of model validity. In this introduction, I outline, more broadly, the many faces of error and uncertainty in SDM.
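Both fit measures mentioned above can be computed directly from quantities reported for a fitted model: D2 is the proportion of the null deviance explained by the model, and AIC penalizes the log-likelihood by the number of estimated parameters. A minimal sketch in Python (the deviance and likelihood values below are hypothetical, not drawn from any model in this book):

```python
def explained_deviance(null_deviance, residual_deviance):
    """D2: proportion of the null deviance explained by the model."""
    return (null_deviance - residual_deviance) / null_deviance

def aic(log_likelihood, n_params):
    """AIC = 2k - 2 ln L; lower values indicate a better fit-complexity trade-off."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical output from a fitted GLM: null deviance 138.6,
# residual deviance 92.4, four estimated parameters.
d2 = explained_deviance(138.6, 92.4)
print(round(d2, 3))            # → 0.333
print(aic(-92.4 / 2.0, 4))     # → 100.4
```

Note that, unlike D2, AIC is useful only for comparing candidate models fitted to the same data; its absolute value has no stand-alone interpretation.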
One broad and useful definition of model validity holds that validation means a model is acceptable for its intended use because it meets specified performance requirements (Rykiel, 1996). Performance can be measured by a number of criteria (Morrison et al., 1998). These criteria can be applied at the different stages of model development described in the introduction to Part III – conceptual formulation, statistical formulation and model calibration – as well as in the subsequent model evaluation steps (Table 9.1). To reiterate, model evaluation in SDM has tended to focus on predictive performance, but other criteria, such as ecological realism, the spatial pattern of error, and model credibility (acceptability to the user community), are also important.