The Prior Can Often Only Be Understood in the Context of the Likelihood

Bibliographic Details
Published in: Entropy (Basel, Switzerland), Vol. 19, No. 10, p. 555
Main Authors: Gelman, Andrew; Simpson, Daniel; Betancourt, Michael
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.10.2017

Summary: A key sticking point of Bayesian analysis is the choice of prior distribution, and there is a vast literature on potential defaults, including uniform priors, Jeffreys’ priors, reference priors, maximum entropy priors, and weakly informative priors. These methods, however, often manifest a key conceptual tension in prior modeling: a model encoding true prior information should be chosen without reference to the model of the measurement process, but almost all common prior modeling techniques are implicitly motivated by a reference likelihood. In this paper we resolve this apparent paradox by placing the choice of prior into the context of the entire Bayesian analysis, from inference to prediction to model evaluation.
ISSN: 1099-4300
DOI: 10.3390/e19100555
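
The tension described in the summary can be made concrete with a standard textbook fact (an illustration added here, not drawn from the paper itself): the Jeffreys "default" prior is proportional to the square root of the Fisher information of the likelihood, so the same parameter receives different "non-informative" priors under different measurement models. A minimal Python sketch, assuming a Bernoulli versus a geometric observation model for the same success probability theta:

```python
import numpy as np

# Jeffreys' prior: pi(theta) proportional to sqrt(I(theta)), where I(theta) is the
# Fisher information of the *likelihood*. The "default" prior therefore depends on
# the measurement model, which is the conceptual tension the abstract points to.

theta = np.linspace(0.01, 0.99, 99)  # grid over the success probability

# Bernoulli observation model: I(theta) = 1 / (theta * (1 - theta)),
# so the Jeffreys prior is the Beta(1/2, 1/2) density (up to a constant).
jeffreys_bernoulli = np.sqrt(1.0 / (theta * (1.0 - theta)))

# Geometric observation model (failures before the first success) for the *same*
# parameter: I(theta) = 1 / (theta**2 * (1 - theta)), giving a different -- and in
# fact improper -- "default" prior proportional to theta**-1 * (1 - theta)**-0.5.
jeffreys_geometric = np.sqrt(1.0 / (theta**2 * (1.0 - theta)))

# The two "non-informative" defaults are visibly different functions of theta.
for t, b, g in zip(theta[::24], jeffreys_bernoulli[::24], jeffreys_geometric[::24]):
    print(f"theta={t:.2f}  Bernoulli-Jeffreys={b:.3f}  Geometric-Jeffreys={g:.3f}")
```

Which of the two counts as the "non-informative" choice depends entirely on the likelihood, which is the sense in which the prior can often only be understood in the context of the likelihood.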