The MLE is minimax optimal for LGC


Bibliographic Details
Published in: arXiv.org
Main Authors: Cohen, Doron; Kontorovich, Aryeh; Weiss, Roi
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 02.10.2024

Summary: We revisit the recently introduced Local Glivenko-Cantelli setting, which studies distribution-dependent uniform convergence rates of the Maximum Likelihood Estimator (MLE). In this work, we investigate generalizations of this setting where arbitrary estimators are allowed rather than just the MLE. Can a strictly larger class of measures be learned? Can better risk decay rates be obtained? We provide exhaustive answers to these questions -- both of which are negative, provided the learner is barred from exploiting some infinite-dimensional pathologies. On the other hand, allowing such exploits does lead to a strictly larger class of learnable measures.
ISSN:2331-8422