Learning sample representativeness for class-imbalanced multi-label classification

Bibliographic Details
Published in: Pattern Analysis and Applications (PAA), Vol. 27, No. 2
Main Authors: Zhang, Yu; Cao, Sichen; Mi, Siya; Bian, Yali
Format: Journal Article
Language: English
Published: London: Springer London (Springer Nature B.V.), 01.06.2024
Summary: Class imbalance is a common problem in multi-label image classification. In multi-label datasets, the co-occurrence of labels presents a unique set of difficulties, making it hard for traditional methods to produce satisfactory results, particularly on tail classes. Based on previous research and our investigation, we have found that the number of labels present in a given sample can influence classification results. Nevertheless, certain samples within the tail classes exhibit resistance to this influence, which is a critical aspect of class-imbalanced multi-label classification. In this paper, we term these samples representative samples. Emphasizing representative samples during training can effectively address the above issues. Specifically, we propose a new method to learn sample representativeness, named Representativeness-Emphasizing Loss (REL). First, we use a new re-weighting form to rebalance the weights based on sample representativeness. Then, a modified focal loss dynamically assigns tailored parameters to each class in each sample to further emphasize sample representativeness. Extensive experiments on two class-imbalanced datasets show that models trained with this new loss function achieve performance comparable to existing methods.
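The record describes REL only at a high level: a representativeness-based re-weighting combined with a modified focal loss whose focusing parameters are tailored per class and per sample. As a rough illustration of that structure, the sketch below shows a generic multi-label focal-style loss with per-sample, per-class weights; the weighting scheme, the `representativeness` input, and the `gamma_pos`/`gamma_neg` parameters are placeholder assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def rel_style_loss(logits, targets, representativeness,
                   gamma_pos=1.0, gamma_neg=2.0):
    """Illustrative focal-style multi-label loss with per-sample, per-class weights.

    NOTE: this is not the paper's Representativeness-Emphasizing Loss; it only
    mirrors the two ingredients named in the abstract (representativeness-based
    re-weighting + a focal loss with per-class, per-sample parameters) using
    assumed placeholder forms.

    logits:             (batch, num_classes) raw scores
    targets:            (batch, num_classes) float multi-hot labels
    representativeness: (batch, num_classes) assumed scores in [0, 1]
    """
    probs = torch.sigmoid(logits)

    # Focal-style modulating factor: down-weight easy predictions, with
    # separate (assumed) focusing strengths for positive and negative labels.
    pt = targets * probs + (1 - targets) * (1 - probs)
    gamma = targets * gamma_pos + (1 - targets) * gamma_neg
    focal_factor = (1 - pt) ** gamma

    # Assumed re-weighting: emphasize entries with higher representativeness.
    weights = 1.0 + representativeness

    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (weights * focal_factor * bce).mean()
```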
ISSN: 1433-7541, 1433-755X
DOI: 10.1007/s10044-024-01209-8