Linking error measures to model questions


Bibliographic Details
Published in: Ecological Modelling, Vol. 487, p. 110562
Main Authors: Jacobs, Bas; Tobi, Hilde; Hengeveld, Geerten M.
Format: Journal Article
Language: English
Published: 01.01.2024

Summary: Models for forecasting various ecosystem properties have great potential that comes with a need for model validation. Before we can perform such validation, we need to define what it means for the model to perform well, which depends on the question being asked. Often, it seems easy to ignore the model question and take a standard well-known error measure for comparing the model to the available data. The question is whether this practice is adequate. Here, we defined different types of model-data mismatches that may be more or less relevant to different types of questions. We show that error measures differ in their sensitivity to the type of mismatch and robustness to sparse and noisy data. The results imply that a careful selection of error measures, using a clearly defined ecological question as a starting point, is vital to proper model evaluation. While we present our results as generally applicable to the validation of any type of forecasting model, we also illustrate them using cyanobacterial bloom modelling as a detailed example of a case where different questions could be asked of the same model.
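As an illustrative sketch of the abstract's point (not code from the paper; the signals and forecasts below are invented), the following Python snippet shows how two standard pointwise error measures, RMSE and MAE, can both rate a flat forecast that misses a bloom entirely as closer to the observations than a forecast that reproduces the bloom's shape but shifts its timing by one step. Whether that ranking is acceptable depends entirely on the question asked of the model, e.g. "when does the bloom peak?" versus "what is the average concentration?".

```python
import math

# Hypothetical observed bloom signal and two model forecasts (illustrative only).
observed = [0, 1, 4, 9, 4, 1, 0]   # a single bloom peak
shifted  = [0, 0, 1, 4, 9, 4, 1]   # correct shape, peak one step late (timing mismatch)
flat     = [2, 2, 2, 2, 2, 2, 2]   # misses the bloom dynamics entirely

def rmse(pred, obs):
    """Root mean squared error: penalises large pointwise deviations heavily."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def mae(pred, obs):
    """Mean absolute error: weights all pointwise deviations linearly."""
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

for name, pred in [("shifted", shifted), ("flat", flat)]:
    print(f"{name}: RMSE={rmse(pred, observed):.2f}, MAE={mae(pred, observed):.2f}")
# shifted: RMSE=3.16, MAE=2.57
# flat:    RMSE=3.09, MAE=2.43
```

Both pointwise measures prefer the flat forecast, even though the shifted forecast arguably captures the ecology better, which is the kind of mismatch-sensitivity the abstract argues should drive the choice of error measure.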
ISSN: 0304-3800
DOI: 10.1016/j.ecolmodel.2023.110562