Analytical guidelines to increase the value of community science data: An example using eBird data to estimate species distributions

Bibliographic Details
Published in: Diversity & Distributions, Vol. 27, No. 7, pp. 1265-1277
Main Authors: Johnston, Alison; Hochachka, Wesley M.; Strimas-Mackey, Matthew E.; Gutierrez, Viviana Ruiz; Robinson, Orin J.; Miller, Eliot T.; Auer, Tom; Kelling, Steve T.; Fink, Daniel
Format: Journal Article
Language: English
Published: Oxford: Wiley (John Wiley & Sons, Inc), 01.07.2021

Summary:

Aim: Ecological data collected by the general public are valuable for addressing a wide range of questions in ecological research and conservation planning, and there has been a rapid increase in the scope and volume of the data available. However, data from eBird and other large-scale projects with volunteer observers typically present several challenges that can impede robust ecological inference, including spatial bias, variation in effort and species reporting bias.

Innovation: We use the example of estimating species distributions with data from eBird, a community science or citizen science (CS) project. We estimate two widely used metrics of species distributions: encounter rate and occupancy probability. For each metric, we critically assess the impact of data processing steps that either degrade or refine the data used in the analyses. Because CS data density varies widely across the globe, we also test whether differences in model performance are robust to sample size.

Main conclusions: Model performance improved when data processing and analytical methods addressed the challenges arising from CS data; however, the degree of improvement varied with species and data density. The largest gains in model performance came from (1) the use of complete checklists, in which observers report all of the species they detect and identify, allowing non-detections to be inferred, and (2) the use of covariates describing variation in effort and detectability for each checklist. Occupancy models were more robust to a lack of complete checklists. Improvements in model performance with data refinement were more evident at larger sample sizes. In general, the value of each refinement varied by situation, and we encourage researchers to assess the benefits in other scenarios. These approaches will enable researchers to more effectively harness the vast ecological knowledge that exists within CS data for conservation and basic research.
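To make the two highlighted refinements concrete, the sketch below illustrates the kind of zero-filling and effort-covariate preparation the abstract describes: restricting to complete checklists so non-detections can be inferred, and keeping per-checklist effort variables. This is a minimal Python/pandas sketch, not the authors' workflow (which is described in the full paper); the file names and column names are hypothetical stand-ins, loosely modelled on eBird Basic Dataset fields.

```python
import pandas as pd

# Hypothetical input tables; file and column names are illustrative only.
observations = pd.read_csv("ebd_observations.csv")  # checklist_id, species
checklists = pd.read_csv("ebd_checklists.csv")      # checklist_id, all_species_reported (bool),
                                                     # duration_minutes, effort_distance_km,
                                                     # number_observers, latitude, longitude

# 1. Restrict to complete checklists: only when observers report all the
#    species they detect and identify can non-detections be inferred.
complete = checklists[checklists["all_species_reported"]].copy()

# 2. Zero-fill a focal species: complete checklists with no record of the
#    species become inferred non-detections (detected = 0).
focal_species = "Wood Thrush"
detected_ids = set(
    observations.loc[observations["species"] == focal_species, "checklist_id"]
)
complete["detected"] = complete["checklist_id"].isin(detected_ids).astype(int)

# 3. Retain per-checklist effort covariates so a model of encounter rate or
#    occupancy can account for variation in effort and detectability.
effort_covariates = ["duration_minutes", "effort_distance_km", "number_observers"]
model_data = complete[["checklist_id", "detected", "latitude", "longitude"] + effort_covariates]
print(model_data.head())
```

The resulting detection/non-detection table, together with the effort covariates, is the type of input that encounter-rate models (e.g., logistic regression) or occupancy models would then take; the full paper evaluates how much each such refinement improves model performance.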
ISSN: 1366-9516 (print); 1472-4642 (online)
DOI: 10.1111/ddi.13271