Statistical minimax theorems via nonstandard analysis
Format: Journal Article
Language: English
Published: 26.12.2022
DOI: 10.48550/arxiv.2212.13250
Summary: For statistical decision problems with finite parameter space, it is well-known that the upper value (minimax value) agrees with the lower value (maximin value). Only under a generalized notion of prior does such an equivalence carry over to the case of infinite parameter spaces, provided nature can play a prior distribution and the statistician can play a randomized strategy. Various such extensions of this classical result have been established, but they are subject to technical conditions such as compactness of the parameter space or continuity of the risk functions. Using nonstandard analysis, we prove a minimax theorem for arbitrary statistical decision problems. Informally, we show that for every statistical decision problem, the standard upper value equals the lower value when the $\sup$ is taken over the collection of all internal priors, which may assign infinitesimal probability to (internal) events. Applying our nonstandard minimax theorem, we derive several standard minimax theorems: a minimax theorem on compact parameter spaces with continuous risk functions, a finitely additive minimax theorem with bounded risk functions, and a minimax theorem on totally bounded metric parameter spaces with Lipschitz risk functions.
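The identity the abstract describes can be sketched in standard decision-theoretic notation (the symbols $\Theta$, $\delta$, $R$, and $\pi$ below are a conventional gloss, not taken from the paper itself): the paper's claim is that the standard upper value on the left equals the lower value on the right once $\pi$ is allowed to range over all internal priors of the nonstandard extension.

```latex
% Hedged gloss of the abstract's minimax identity:
% R(\theta, \delta) = risk of randomized decision rule \delta at parameter \theta;
% \pi ranges over internal priors, which may assign infinitesimal probability.
\inf_{\delta}\,\sup_{\theta \in \Theta} R(\theta, \delta)
  \;=\;
\sup_{\pi}\,\inf_{\delta} \int_{\Theta} R(\theta, \delta)\,\mathrm{d}\pi(\theta)
```

The classical results the paper recovers correspond to conditions (compact $\Theta$ with continuous $R$, bounded $R$ with finitely additive priors, totally bounded metric $\Theta$ with Lipschitz $R$) under which the supremum over internal priors can be replaced by a supremum over standard priors.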