Bayesian Ensembles of Binary-Event Forecasts: When Is It Appropriate to Extremize or Anti-Extremize?
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 05.05.2017 |
Summary: | Many organizations face critical decisions that rely on forecasts of binary events. In these situations, organizations often gather forecasts from multiple experts or models and average those forecasts to produce a single aggregate forecast. Because the average forecast is known to be underconfident, methods have been proposed that create an aggregate forecast more extreme than the average forecast. But is it always appropriate to extremize the average forecast? And if not, when is it appropriate to anti-extremize (i.e., to make the aggregate forecast less extreme)? To answer these questions, we introduce a class of optimal aggregators. These aggregators are Bayesian ensembles because they follow from a Bayesian model of the underlying information experts have. Each ensemble is a generalized additive model of experts' probabilities that first transforms the experts' probabilities into their corresponding information states, then linearly combines these information states, and finally transforms the combined information states back into the probability space. Analytically, we find that these optimal aggregators do not always extremize the average forecast, and when they do, they can run counter to existing methods. On two publicly available datasets, we demonstrate that these new ensembles are easily fit to real forecast data and are more accurate than existing methods. |
---|---|
DOI: | 10.48550/arxiv.1705.02391 |
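
The summary describes aggregators that transform each expert's probability into an information state, linearly combine those states, and map the result back to a probability. The sketch below is a minimal illustration of that transform-combine-invert structure, not the paper's exact Bayesian ensemble: it assumes a logit (log-odds) link, equal expert weights, and a single scaling parameter `alpha` fit by minimizing the Brier score on past events (alpha > 1 extremizes the log-odds average; alpha < 1 anti-extremizes). The function names `aggregate` and `fit_alpha` are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def logit(p):
    # Map a probability to log-odds (the assumed "information state").
    return np.log(p / (1.0 - p))

def inv_logit(z):
    # Map log-odds back to a probability.
    return 1.0 / (1.0 + np.exp(-z))

def aggregate(probs, weights, alpha):
    """Generalized-additive aggregator (illustrative): transform each expert's
    probability, linearly combine the transforms, scale by alpha, and map back.
    alpha > 1 extremizes the weighted log-odds average; alpha < 1 anti-extremizes."""
    z = logit(np.asarray(probs, dtype=float))
    return inv_logit(alpha * np.dot(weights, z))

def fit_alpha(forecasts, outcomes):
    """Fit the scaling parameter on historical data by minimizing the mean
    Brier score. forecasts: (n_events, n_experts) array; outcomes: 0/1 array."""
    n_experts = forecasts.shape[1]
    w = np.full(n_experts, 1.0 / n_experts)  # equal weights, for simplicity

    def brier(alpha):
        agg = np.array([aggregate(f, w, alpha[0]) for f in forecasts])
        return np.mean((agg - outcomes) ** 2)

    res = minimize(brier, x0=[1.0], bounds=[(0.01, 10.0)])
    return res.x[0]

# Example: fit alpha on three past events, then aggregate a new set of forecasts.
past_forecasts = np.array([[0.6, 0.7, 0.65],
                           [0.2, 0.3, 0.25],
                           [0.8, 0.75, 0.9]])
past_outcomes = np.array([1, 0, 1])
alpha = fit_alpha(past_forecasts, past_outcomes)
p_agg = aggregate([0.6, 0.7, 0.65], np.full(3, 1.0 / 3.0), alpha)
print(f"alpha = {alpha:.2f}, aggregate forecast = {p_agg:.3f}")
```

Depending on the fitted value of `alpha`, this kind of aggregator can make the combined forecast either more or less extreme than the simple average, which is the behavior the paper analyzes.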