Characterizing Implicit Bias in Terms of Optimization Geometry


Bibliographic Details
Published in: arXiv.org
Main Authors: Gunasekar, Suriya; Lee, Jason; Soudry, Daniel; Srebro, Nathan
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 22.06.2020
Summary: We study the implicit bias of generic optimization methods, such as mirror descent, natural gradient descent, and steepest descent with respect to different potentials and norms, when optimizing underdetermined linear regression or separable linear classification problems. We explore the question of whether the specific global minimum (among the many possible global minima) reached by an algorithm can be characterized in terms of the potential or norm of the optimization geometry, and independently of hyperparameter choices such as step-size and momentum.
ISSN: 2331-8422
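The setting the summary describes can be illustrated with a minimal sketch (not from the paper itself): gradient descent on an underdetermined least-squares problem is mirror descent with the squared-Euclidean potential, and when started at zero it converges to the minimum ℓ2-norm interpolating solution rather than an arbitrary global minimum. The matrix sizes, step size, and iteration count below are arbitrary illustrative choices.

```python
import numpy as np

# Underdetermined linear regression: more parameters than equations,
# so L(w) = ||Aw - b||^2 has infinitely many global minima.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 10))   # 3 equations, 10 unknowns
b = rng.standard_normal(3)

# Plain gradient descent from w = 0; step size and iteration count
# are illustrative choices, small enough for stable convergence.
w = np.zeros(10)
for _ in range(20000):
    w -= 0.01 * A.T @ (A @ w - b)

# The minimum l2-norm solution of Aw = b, in closed form.
w_min_norm = A.T @ np.linalg.solve(A @ A.T, b)

# Implicit bias: among all global minima, gradient descent from the
# origin selects exactly the minimum-norm one.
print(np.allclose(A @ w, b, atol=1e-6))        # fits the data
print(np.allclose(w, w_min_norm, atol=1e-6))   # matches min-norm solution
```

The second check holds because the iterates never leave the row space of A when initialized at zero; changing the potential (as in mirror descent) changes which global minimum is selected, which is the kind of characterization the paper investigates.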