Statistical Inference and Decision Making in Conservation Biology

Bibliographic Details
Published in: Israel Journal of Ecology & Evolution, Vol. 57, No. 4, pp. 309–317
Main Author: Saltz, David
Format: Journal Article
Language: English
Published: Science From Israel, a Division of LPPLtd, 01.12.2011
ISSN: 1565-9801
DOI: 10.1560/IJEE.57.4.309

Summary: Since the formulation of hypothesis testing by Neyman and Pearson in 1933, the approach has been subject to continuous criticism. Yet, until recently, this criticism has for the most part gone unheeded. The negative appraisal focuses mainly on the fact that P-values provide no evidential support for either the null hypothesis (H0) or the alternative hypothesis (Ha). Although hypothesis testing done under tightly controlled conditions can provide some insight regarding the alternative hypothesis based on the uncertainty of H0, strictly speaking this does not constitute evidence. More importantly, well-controlled research environments rarely exist in field-centered sciences such as ecology. These problems are manifestly more acute in applied field sciences, such as conservation biology, that are expected to support decision making, often under crisis conditions. In conservation biology, the consequences of a Type II error are often far worse than those of a Type I error. The "advantage" afforded to H0 by setting the probability of committing a Type I error (α) to a low value (0.05), in effect, increases the probability of committing a Type II error, which can lead to disastrous practical consequences. In the past decade, multi-model inference using information-theoretic or Bayesian approaches has been offered as a better alternative. These techniques allow a series of models to be compared on equal grounds. Using these approaches, it is unnecessary to select a single "best" model. Rather, the parameters needed for decision making can be averaged across all models, weighted according to the support accorded each model. Here, I present a hypothetical example of animal counts that suggest a possible population decline, and analyze the data using hypothesis testing and an information-theoretic approach. A comparison between the two approaches highlights the shortcomings of hypothesis testing and the advantages of multi-model inference.
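The model-averaging procedure the summary describes can be illustrated with a small sketch. The data and both candidate models below are hypothetical (they are not the example from the paper): a no-trend model and a linear-trend model are fit to made-up annual counts, each model's AIC is computed, AIC differences are converted to Akaike weights, and the yearly change is averaged across models with the no-trend model contributing a slope of zero.

```python
import math

# Hypothetical annual counts suggesting a possible decline
# (illustrative data only, not taken from the article).
years  = [0, 1, 2, 3, 4, 5, 6, 7]
counts = [52, 49, 50, 45, 47, 42, 44, 40]
n = len(counts)

# Model 0: constant mean (no trend).
mean_c = sum(counts) / n
rss0 = sum((c - mean_c) ** 2 for c in counts)

# Model 1: linear trend, fit by ordinary least squares.
mean_y = sum(years) / n
sxy = sum((y - mean_y) * (c - mean_c) for y, c in zip(years, counts))
sxx = sum((y - mean_y) ** 2 for y in years)
slope = sxy / sxx
intercept = mean_c - slope * mean_y
rss1 = sum((c - (intercept + slope * y)) ** 2
           for y, c in zip(years, counts))

# AIC for least-squares fits: n*ln(RSS/n) + 2k,
# where k counts the residual variance as a parameter.
aic0 = n * math.log(rss0 / n) + 2 * 2  # mean + sigma
aic1 = n * math.log(rss1 / n) + 2 * 3  # intercept + slope + sigma

# Akaike weights: relative support for each model on equal grounds.
best = min(aic0, aic1)
raw = [math.exp(-0.5 * (a - best)) for a in (aic0, aic1)]
w0, w1 = (r / sum(raw) for r in raw)

# Model-averaged yearly change: no single "best" model is chosen;
# the no-trend model contributes a slope of 0.
avg_slope = w0 * 0.0 + w1 * slope
print(f"weights: no-trend={w0:.3f}, trend={w1:.3f}")
print(f"model-averaged yearly change: {avg_slope:.2f}")
```

The decision-relevant quantity (the estimated yearly change) is thus reported with each model's contribution weighted by its support, rather than resting on whether a single null-hypothesis test reached α = 0.05.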