Robust Sampling in Deep Learning
Deep learning requires regularization mechanisms to reduce overfitting and improve generalization. We address this problem with a new regularization method based on distributionally robust optimization. The key idea is to modify the contribution of each sample so as to tighten the empirical risk bound. During stochastic training, samples are selected according to their accuracy, so that the worst-performing samples are the ones that contribute the most to the optimization. We study different scenarios and show in which ones the method speeds up convergence or increases accuracy.
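For context, distributionally robust optimization replaces the plain empirical risk with a worst case over distributions close to the empirical one. A standard generic form of this objective (not necessarily the exact bound used in the paper) is:

```latex
\min_{\theta} \;\; \sup_{Q \,:\, D(Q \,\|\, \hat{P}_n) \le \rho} \;\; \mathbb{E}_{x \sim Q}\big[\ell(\theta; x)\big]
```

where \hat{P}_n is the empirical distribution of the training samples, D is a divergence, and \rho bounds how far the reweighted distribution may drift. Up-weighting the worst-performing samples in a mini-batch is the stochastic analogue of the inner maximization. Below is a minimal sketch of such loss-dependent weighting in a training step, written in PyTorch; the softmax weighting and the `temperature` parameter are illustrative assumptions standing in for the paper's selection rule, not its exact method:

```python
# Sketch of loss-weighted sample selection during stochastic training.
# The softmax weighting over per-sample losses is a common DRO surrogate;
# it is an assumption here, not the paper's exact formulation.
import torch
import torch.nn as nn

def robust_training_step(model, optimizer, inputs, targets, temperature=1.0):
    """One SGD step in which the worst-performing samples contribute the most."""
    criterion = nn.CrossEntropyLoss(reduction="none")  # keep per-sample losses
    optimizer.zero_grad()
    logits = model(inputs)
    per_sample_loss = criterion(logits, targets)  # shape: (batch,)
    # Higher loss -> larger weight; detached so the weights themselves
    # are treated as constants and not differentiated through.
    weights = torch.softmax(per_sample_loss.detach() / temperature, dim=0)
    loss = (weights * per_sample_loss).sum()
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative usage on random data.
model = nn.Linear(10, 3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 10), torch.randint(0, 3, (32,))
robust_training_step(model, optimizer, x, y)
```

As `temperature` shrinks, the weights concentrate on the single worst sample (hard worst-case selection); as it grows, the step approaches the ordinary uniform average over the batch.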
Main Authors | , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 04.06.2020 |
Subjects | |
DOI | 10.48550/arxiv.2006.02734 |