An Exploration of Multicalibration Uniform Convergence Bounds

Bibliographic Details
Published in: arXiv.org
Main Authors: Rosenberg, Harrison; Bhattacharjee, Robi; Fawaz, Kassem; Jha, Somesh
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 09.02.2022

Summary: Recent works have investigated the sample complexity necessary for fair machine learning. The most advanced of these sample complexity bounds are developed by analyzing multicalibration uniform convergence for a given predictor class. We present a framework which yields multicalibration error uniform convergence bounds by reparametrizing the sample complexities of Empirical Risk Minimization (ERM) learning. From this framework, we demonstrate that multicalibration error depends on both the classifier architecture and the underlying data distribution. We perform an experimental evaluation to investigate the behavior of multicalibration error for different families of classifiers, and compare the results of this evaluation to multicalibration error concentration bounds. Our investigation provides additional perspective on both algorithmic fairness and multicalibration error convergence bounds. Given the prevalence of ERM sample complexity bounds, the proposed framework enables machine learning practitioners to easily understand the convergence behavior of multicalibration error for a myriad of classifier architectures.
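
A hedged illustration of the central quantity: the record does not reproduce the paper's formal definition, but multicalibration error is commonly measured as the largest gap between the average outcome and the average prediction, taken over a collection of subgroups and over bins of the predictor's output. The Python sketch below follows that reading; the function name multicalibration_error, the equal-width binning controlled by n_bins, the min_count cutoff for sparse cells, and the max-over-cells aggregation are illustrative assumptions, not the authors' exact formulation.

    import numpy as np

    def multicalibration_error(y_true, y_pred, groups, n_bins=10, min_count=20):
        """Largest empirical calibration gap over (group, prediction-bin) cells.

        y_true : binary outcomes in {0, 1}
        y_pred : predicted probabilities in [0, 1]
        groups : dict mapping a group name to a boolean mask over the examples
        """
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        # Assign each prediction to one of n_bins equal-width bins on [0, 1].
        bins = np.clip((y_pred * n_bins).astype(int), 0, n_bins - 1)

        worst_gap = 0.0
        for mask in groups.values():
            for b in range(n_bins):
                cell = mask & (bins == b)
                if cell.sum() < min_count:  # skip cells too small to estimate reliably
                    continue
                # Calibration gap: mean outcome vs. mean prediction within the cell.
                gap = abs(y_true[cell].mean() - y_pred[cell].mean())
                worst_gap = max(worst_gap, gap)
        return worst_gap

    # Example on synthetic data: labels are drawn from the predicted probabilities,
    # so the measured multicalibration error should be small.
    rng = np.random.default_rng(0)
    p = rng.uniform(size=5000)
    y = rng.binomial(1, p)
    groups = {"all": np.ones_like(p, dtype=bool), "low": p < 0.5, "high": p >= 0.5}
    print(multicalibration_error(y, p, groups))

Under this sketch, one could track how the empirical multicalibration error of different classifier families behaves as the sample size grows, which is the kind of convergence behavior the summary above describes; the quantity and its aggregation should be adapted to the paper's actual definition.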
ISSN: 2331-8422