SADCMF: Self-Attentive Deep Consistent Matrix Factorization for Micro-Video Multi-Label Classification

Bibliographic Details
Published in: IEEE Transactions on Multimedia, Vol. 26, pp. 10331-10341
Main Authors: Fan, Fugui; Jing, Peiguang; Nie, Liqiang; Gu, Haoyu; Su, Yuting
Format: Journal Article
Language: English
Published: IEEE, 2024

Summary: There is growing scholarly and industrial interest in micro-video-centric research, within which multi-label learning has emerged as a fundamental yet attractive subject. Existing methods primarily emphasize the feature representations of individual micro-videos while neglecting the latent interdependencies between the instance and label domains. To address this problem, we propose a novel self-attentive deep consistent matrix factorization (SADCMF) method, which jointly explores dual-domain hierarchical representations and their inherent dependencies for micro-video multi-label classification. SADCMF has three primary characteristics. 1) A dual-domain deep collaborative factorization module explores the first-stage representations of instance features and the discriminative embeddings of label semantics in a mutually beneficial manner. 2) A correlation-driven self-attentive factorization module acquires label-aware attentive outputs, which are combined with the original features through a residual structure to enrich the second-stage feature representations. 3) A dual-stream representation consistency module enforces both unidirectional and bidirectional representation consistency while narrowing the discrepancies between the two-stage representations, thereby improving the generalization ability of the method. Extensive experiments on two publicly available micro-video multi-label datasets demonstrate its superior performance compared with state-of-the-art methods.
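
The abstract describes the architecture only at a high level. As an illustration of the second characteristic, the short PyTorch sketch below shows one plausible form of a label-aware attention step with a residual connection over instance features; the class name LabelAwareSelfAttention, the chosen dimensions, and the use of scaled dot-product attention are assumptions made for illustration, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelAwareSelfAttention(nn.Module):
    """Illustrative sketch (not the authors' implementation): instance
    features attend to label embeddings, and a residual connection adds
    the attentive output back onto the original features."""

    def __init__(self, feat_dim: int, label_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.query = nn.Linear(feat_dim, hidden_dim)   # queries from instance features
        self.key = nn.Linear(label_dim, hidden_dim)    # keys from label embeddings
        self.value = nn.Linear(label_dim, feat_dim)    # values projected back to feature space
        self.scale = hidden_dim ** 0.5

    def forward(self, instance_feats, label_embeds):
        # instance_feats: (batch, feat_dim); label_embeds: (num_labels, label_dim)
        q = self.query(instance_feats)                      # (batch, hidden_dim)
        k = self.key(label_embeds)                          # (num_labels, hidden_dim)
        v = self.value(label_embeds)                        # (num_labels, feat_dim)
        attn = F.softmax(q @ k.t() / self.scale, dim=-1)    # label-aware attention weights
        attentive = attn @ v                                # (batch, feat_dim)
        return instance_feats + attentive                   # residual enrichment of the features

# Toy usage with random tensors
feats = torch.randn(4, 256)       # 4 micro-videos, 256-d first-stage features
labels = torch.randn(10, 64)      # 10 labels, 64-d semantic embeddings
module = LabelAwareSelfAttention(feat_dim=256, label_dim=64)
enriched = module(feats, labels)  # (4, 256) second-stage-style representations

In this reading, the residual sum plays the role of the enriched second-stage representation that the consistency module would then compare against the first-stage features.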
ISSN: 1520-9210, 1941-0077
DOI: 10.1109/TMM.2024.3406196