Anchor-guided global view reconstruction for multi-view multi-label feature selection

Bibliographic Details
Published in: Information Sciences, Vol. 679, p. 121124
Main Authors: Hao, Pingting; Liu, Kunpeng; Gao, Wanfu
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.09.2024

More Information
Summary: In multi-view multi-label learning (MVML), the accuracy of feature weights is pivotal for establishing the feature order. However, conventional MVML methods often struggle to integrate the distinct information carried by multiple views effectively, leading to unclear segmentation and the potential introduction of noise. To address this challenge, this paper proposes an anchor-based latent representation method for global-view learning in MVML. Specifically, the inherent information of each view is encoded to derive a candidate multi-view representation. Anchors extracted from both the candidate view and the global view are then constrained to be approximately equal in the latent space. Furthermore, a carefully designed view matrix serves as a supplement and is seamlessly integrated into the reconstruction process to augment the available information. Convergence of the resulting updates is validated under multiplicative update rules. Experimental results demonstrate the superior performance of the proposed method across various multi-view datasets.
ISSN: 0020-0255, 1872-6291
DOI: 10.1016/j.ins.2024.121124
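
The summary above describes the method only at a high level, so the following is a minimal, purely illustrative Python sketch of the general idea: each view is factorized NMF-style into a candidate latent code and a basis, a single global latent code reconstructs every view through the same bases, and a shared set of anchor rows is softly constrained to agree between the candidate codes and the global code, with all factors kept non-negative via multiplicative updates. The function name, the objective, and the update scheme are assumptions made for illustration and do not reproduce the paper's exact formulation.

import numpy as np

def anchor_guided_global_view(views, n_latent=20, n_anchors=10,
                              alpha=1.0, n_iter=300, seed=0):
    # Toy NMF-style sketch (hypothetical names and objective): each view X_v is
    # approximated both by its own candidate code H_v and by a shared global
    # code G through the same basis W_v; a set of anchor rows A is softly
    # pulled to agree between H_v and G.  Not the paper's exact model.
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    G = rng.random((n, n_latent))                        # global latent code
    Hs = [rng.random((n, n_latent)) for _ in views]      # candidate view codes
    Ws = [rng.random((n_latent, X.shape[1])) for X in views]
    A = rng.choice(n, size=n_anchors, replace=False)     # anchor sample indices
    mask = np.zeros((n, 1))
    mask[A] = 1.0                                        # selects anchor rows
    eps = 1e-12

    for _ in range(n_iter):
        numG, denG = np.zeros_like(G), np.zeros_like(G)
        for v, X in enumerate(views):
            H, W = Hs[v], Ws[v]
            # multiplicative update for the basis of view v (fits both H and G)
            W *= (H.T @ X + G.T @ X) / ((H.T @ H + G.T @ G) @ W + eps)
            # candidate code: reconstruction term plus anchor pull toward G
            H *= (X @ W.T + alpha * mask * G) / (H @ W @ W.T + alpha * mask * H + eps)
            # accumulate the symmetric terms for the global code
            numG += X @ W.T + alpha * mask * H
            denG += G @ W @ W.T + alpha * mask * G
        G *= numG / (denG + eps)                          # keeps G non-negative
    return G, Hs, Ws


# Hypothetical usage: two random non-negative views of 100 samples each.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X1, X2 = rng.random((100, 50)), rng.random((100, 30))
    G, Hs, Ws = anchor_guided_global_view([X1, X2])
    print(G.shape)  # (100, 20)

In this toy run the returned G is a 100 x 20 non-negative global representation shared across both views; the paper's subsequent feature-weighting and feature-ordering step for feature selection is not reproduced in this sketch.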