Three-teaching: A three-way decision framework to handle noisy labels


Bibliographic Details
Published in: Applied Soft Computing, Vol. 154, p. 111400
Main Authors: Chao, Guoqing; Zhang, Kaiwen; Wang, Xiru; Chu, Dianhui
Format: Journal Article
Language: English
Publisher: Elsevier B.V.
Published: 01.03.2024
ISSN: 1568-4946
EISSN: 1872-9681
DOI: 10.1016/j.asoc.2024.111400
Online Access: Get full text

Abstract Learning with noisy labels represents a prevalent weakly supervised learning paradigm. Uncertain knowledge resulting from noisy labels poses significant challenges for knowledge analysis. Given the memorization effect observed in deep neural networks, training on instances with minimal loss holds promise for effectively handling noisy labels. “Co-teaching”, the state-of-the-art training method in this field, simultaneously trains two deep neural networks using instances with low loss. While this approach has demonstrated promising performance, its effectiveness relies heavily on the predictive capabilities of the two neural networks: if these networks fail to provide reliable predictions, the overall learning performance may be unsatisfactory. To solve this problem, and inspired by three-way decision, we propose a powerful learning paradigm named “Three-teaching”, which employs a “voting mechanism” to incrementally guarantee prediction quality. In this approach, both neural networks make predictions for all the data, but only the data that exhibits consistent prediction results and has a low loss is retained to feed into the third neural network for updating its parameters. The learning process proceeds by alternating the roles of these three neural networks. Experimental results obtained on benchmark datasets illustrate that “Three-teaching” surpasses numerous state-of-the-art methods.
• We propose a three-teaching model that introduces a third neural network and a “voting mechanism” to guarantee prediction quality when facing noisy labels.
• Three-teaching performs better than co-teaching and other compared methods on several real-world datasets.
• Three-teaching provides a new way to implement three-way decision.
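The sample-selection step described in the abstract (keep only instances on which the two networks agree and that have small loss, then use them to update the third network) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, the use of network A's loss for ranking, and the `keep_ratio` parameter are assumptions.

```python
import numpy as np

def select_clean_samples(logits_a, logits_b, labels, keep_ratio=0.8):
    """Voting-based small-loss selection in the spirit of Three-teaching.

    A sample is retained only if networks A and B agree on its predicted
    class (the "voting mechanism") AND it falls in the lowest-loss fraction
    (keep_ratio) of the agreeing set. The returned indices would then be
    used to update the third network's parameters.
    """
    preds_a = logits_a.argmax(axis=1)
    preds_b = logits_b.argmax(axis=1)
    agree = np.flatnonzero(preds_a == preds_b)  # consistent predictions only

    # Cross-entropy loss of network A on the agreeing samples
    shifted = logits_a - logits_a.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    losses = -np.log(probs[agree, labels[agree]] + 1e-12)

    # Keep the small-loss fraction (memorization effect: clean labels first)
    n_keep = max(1, int(keep_ratio * len(agree)))
    return agree[np.argsort(losses)[:n_keep]]
```

In a full training loop, the three networks would rotate roles (two voters, one learner) across iterations, so each network is periodically updated on samples filtered by the other two.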
Article Number: 111400
Authors:
1. Chao, Guoqing (ORCID: 0000-0002-2410-650X; email: guoqingchao@hit.edu.cn)
2. Zhang, Kaiwen
3. Wang, Xiru (ORCID: 0009-0005-7521-3419)
4. Chu, Dianhui (email: chudh@hit.edu.cn)
Content Type: Journal Article
Copyright: 2024 Elsevier B.V.
Discipline: Computer Science
IsPeerReviewed true
IsScholarly true
Keywords: Noisy labels; Deep neural network; Three-teaching
URI: https://dx.doi.org/10.1016/j.asoc.2024.111400