Label Distribution Learning with Label Correlations on Local Samples

Bibliographic Details
Published in: IEEE Transactions on Knowledge and Data Engineering, Vol. 33, No. 4, pp. 1619-1631
Main Authors: Jia, Xiuyi; Li, Zechao; Zheng, Xiang; Li, Weiwei; Huang, Sheng-Jun
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.04.2021

Summary: Label distribution learning (LDL) has been proposed in recent years to address the label ambiguity problem and can be seen as an extension of multi-label learning. To improve the performance of label distribution learning, some existing algorithms exploit label correlations in a global manner, assuming that the same label correlations are shared by all instances. However, instances in different groups may share different label correlations, and few label correlations are globally applicable in real-world tasks. In this paper, two novel label distribution learning algorithms, GD-LDL-SCL and Adam-LDL-SCL, are proposed that exploit label correlations on local samples. To utilize the label correlations on local samples, the influence of local samples is encoded, and a local correlation vector, derived from clustered local samples, is designed as an additional feature vector for each instance. The label distribution of an unseen instance is then predicted by exploiting the original features and the additional features simultaneously. Extensive experiments on real-world data sets validate that the proposed methods address label distribution problems effectively and outperform state-of-the-art methods.
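The summary outlines the general recipe (group samples locally, build a local correlation vector as extra features, predict a label distribution from original plus additional features) but not the exact formulation. The following is a minimal sketch of that recipe, assuming k-means clustering of the training label distributions, a softmax-based closeness encoding for the local correlation vector, a softmax-linear predictor trained with plain gradient descent, and a nearest-neighbour proxy for unseen instances; none of these choices are taken from the paper itself.

```python
# Illustrative sketch only: cluster training samples, derive a per-instance
# "local correlation vector" from the clusters, append it to the original
# features, and learn a mapping to a label distribution. The clustering target,
# the closeness encoding, and the softmax-linear model are assumptions, not the
# paper's GD-LDL-SCL / Adam-LDL-SCL formulation.

import numpy as np
from sklearn.cluster import KMeans
from scipy.special import softmax

rng = np.random.default_rng(0)

# Toy data: n instances, d features, c labels with degrees summing to 1 per instance.
n, d, c, m = 200, 10, 5, 4          # m = number of local clusters (assumed hyperparameter)
X = rng.normal(size=(n, d))
Y = softmax(rng.normal(size=(n, c)), axis=1)

# 1) Group samples locally by clustering their label distributions
#    (an assumption; clustering could equally be done in feature space).
kmeans = KMeans(n_clusters=m, n_init=10, random_state=0).fit(Y)
prototypes = kmeans.cluster_centers_                # (m, c) mean label distribution per cluster

def local_correlation_vector(sq_dists):
    """Encode closeness of an instance to each local cluster as extra features."""
    # Negative squared distance -> softmax gives a soft membership vector of length m.
    return softmax(-sq_dists, axis=-1)

# Distances from each training label distribution to the cluster prototypes.
train_dists = ((Y[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)    # (n, m)
Z = local_correlation_vector(train_dists)                                    # (n, m) additional features

# 2) Learn a softmax-linear predictor on [original features, additional features].
X_aug = np.hstack([X, Z, np.ones((n, 1))])          # add bias column
W = rng.normal(scale=0.01, size=(X_aug.shape[1], c))

def predict(Xa, W):
    return softmax(Xa @ W, axis=1)

# Minimize the KL divergence between true and predicted distributions with plain
# gradient descent (the "GD" flavor; an Adam optimizer could be substituted,
# as the names of the two algorithms suggest).
lr = 0.1
for _ in range(500):
    P = predict(X_aug, W)
    grad = X_aug.T @ (P - Y) / n                    # gradient of mean KL w.r.t. W
    W -= lr * grad

# 3) For an unseen instance the additional features must be computed without its
#    labels; here they are approximated from the nearest training instances in
#    feature space (again an assumption for illustration).
x_new = rng.normal(size=(1, d))
nearest = np.argsort(((X - x_new) ** 2).sum(axis=1))[:10]
y_proxy = Y[nearest].mean(axis=0, keepdims=True)    # proxy label distribution from neighbours
z_new = local_correlation_vector(
    ((y_proxy[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2))
x_aug_new = np.hstack([x_new, z_new, np.ones((1, 1))])
print(predict(x_aug_new, W))                        # predicted label distribution (sums to 1)
```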
ISSN: 1041-4347
EISSN: 1558-2191
DOI: 10.1109/TKDE.2019.2943337