Jo-SRC: A Contrastive Approach for Combating Noisy Labels


Bibliographic Details
Published in arXiv.org
Main Authors Yao, Yazhou; Sun, Zeren; Zhang, Chuanyi; Shen, Fumin; Wu, Qi; Zhang, Jian; Tang, Zhenmin
Format Paper
Language English
Published Ithaca: Cornell University Library, arXiv.org, 24.03.2021

More Information
Summary: Due to the memorization effect in Deep Neural Networks (DNNs), training with noisy labels usually results in inferior model performance. Existing state-of-the-art methods primarily adopt a sample selection strategy, which selects small-loss samples for subsequent training. However, prior literature tends to perform sample selection within each mini-batch, neglecting the imbalance of noise ratios in different mini-batches. Moreover, valuable knowledge within high-loss samples is wasted. To this end, we propose a noise-robust approach named Jo-SRC (Joint Sample Selection and Model Regularization based on Consistency). Specifically, we train the network in a contrastive learning manner. Predictions from two different views of each sample are used to estimate its "likelihood" of being clean or out-of-distribution. Furthermore, we propose a joint loss to advance the model generalization performance by introducing consistency regularization. Extensive experiments have validated the superiority of our approach over existing state-of-the-art methods.
ISSN: 2331-8422
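
The summary above describes the method only at a high level: predictions from two views of each sample estimate its likelihood of being clean or out-of-distribution, and a consistency term regularizes the model. The following is a minimal, hypothetical PyTorch-style sketch of that general idea, not the paper's actual Jo-SRC procedure; the function names, the thresholding rule, and the hyperparameter tau are illustrative assumptions introduced here.

```python
import torch
import torch.nn.functional as F


def js_divergence(p, q, eps=1e-8):
    """Per-sample Jensen-Shannon divergence between two categorical distributions."""
    m = 0.5 * (p + q)
    kl_pm = (p * (torch.log(p + eps) - torch.log(m + eps))).sum(dim=1)
    kl_qm = (q * (torch.log(q + eps) - torch.log(m + eps))).sum(dim=1)
    return 0.5 * (kl_pm + kl_qm)


def select_and_regularize(logits_v1, logits_v2, labels, tau=0.3):
    """Return a per-sample 'clean' mask and a joint loss (CE on selected samples + view consistency).

    logits_v1, logits_v2: model outputs for two augmented views of the same batch, shape (B, C).
    labels: given (possibly noisy) labels, shape (B,).
    tau: illustrative threshold on label agreement used as a cleanliness proxy.
    """
    p1 = F.softmax(logits_v1, dim=1)
    p2 = F.softmax(logits_v2, dim=1)

    # Agreement between the given label and the mean prediction of the two views
    # serves as a rough proxy for the sample being clean.
    mean_p = 0.5 * (p1 + p2)
    label_prob = mean_p.gather(1, labels.unsqueeze(1)).squeeze(1)
    clean_mask = label_prob > tau

    # Cross-entropy computed only on samples judged clean.
    if clean_mask.any():
        ce = F.cross_entropy(logits_v1[clean_mask], labels[clean_mask])
    else:
        ce = logits_v1.new_zeros(())

    # Consistency regularization: the two views should produce similar predictions.
    consistency = js_divergence(p1, p2).mean()

    return clean_mask, ce + consistency
```

Note that the abstract stresses that selecting samples within each mini-batch ignores the imbalance of noise ratios across mini-batches, so a faithful implementation would track these per-sample scores globally across the dataset rather than thresholding batch-by-batch as this simplified sketch does.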