Robust Sampling in Deep Learning

Bibliographic Details
Main Authors: Aguilera, Aurora Cobo; Artés-Rodríguez, Antonio; Pérez-Cruz, Fernando; Olmos, Pablo Martínez
Format: Journal Article
Language: English
Published: 04.06.2020

Summary: Deep learning requires regularization mechanisms to reduce overfitting and improve generalization. We address this problem with a new regularization method based on distributionally robust optimization. The key idea is to modify the contribution of each sample so as to tighten the empirical risk bound. During stochastic training, samples are selected according to their accuracy, so that the worst-performing samples contribute the most to the optimization. We study different scenarios and show those in which the method speeds up convergence or increases accuracy.
DOI: 10.48550/arxiv.2006.02734
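
The selection mechanism described in the summary can be illustrated with a short sketch. The PyTorch snippet below is a minimal, hypothetical illustration, not the authors' published algorithm: it computes per-sample losses on a minibatch and keeps only the worst-performing fraction, so that those samples alone drive the optimization step. The function name `robust_minibatch_loss`, the `keep_ratio` parameter, and the top-k selection rule are all assumptions made for the example.

```python
import torch
import torch.nn as nn

def robust_minibatch_loss(model, inputs, targets, keep_ratio=0.5):
    """Minibatch loss that emphasizes the worst-performing samples.

    Hypothetical sketch: only the `keep_ratio` fraction of samples with
    the highest per-sample loss contributes to the gradient, so the
    hardest examples dominate the optimization step.
    """
    # reduction="none" keeps one loss value per example instead of the mean.
    per_sample = nn.functional.cross_entropy(model(inputs), targets, reduction="none")
    # Select the worst-performing samples: those with the largest losses.
    k = max(1, int(keep_ratio * per_sample.numel()))
    worst, _ = torch.topk(per_sample, k)
    # Average over the selected samples only.
    return worst.mean()

# Example training step (model, optimizer, and batch are assumed to exist):
# loss = robust_minibatch_loss(model, x_batch, y_batch, keep_ratio=0.5)
# loss.backward(); optimizer.step(); optimizer.zero_grad()
```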