Multiclass and Multilabel Classifications by Consensus and Complementarity-Based Multiview Latent Space Projection

Bibliographic Details
Published in: IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 54, no. 3, pp. 1705-1718
Main Authors: Ma, Jianghong; Kou, Weixuan; Lin, Mingquan; Cho, Carmen C. M.; Chiu, Bernard
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.03.2024

Summary: The fusion of multiview data sets, in which features of each sample are categorized into distinct groups, is increasingly important in the big data era. Successful multiview learning approaches have mechanisms to enforce consensus and/or complementarity among views. This article introduces a framework called the consensus and complementarity-based multiview latent space projection (MVLSP-2C) that enforces both principles simultaneously. Consensus is established by extracting and representing information shared by all views in a shared latent space, whereas complementarity among views is achieved by representations in view-specific spaces. As the diversity of the multiview feature representation benefits classification performance, MVLSP-2C minimizes the similarity between the shared and view-specific representations, thereby improving diversity. The driving principle of MVLSP-2C is that the latent space representation is obtained by optimally projecting it to match the original feature space representation on a view-by-view basis. Unlike pairwise consensus methods that enforce consistency between two views, matching on a view-by-view basis allows extensions to settings with more than two views. A related and important advantage of this per-view matching design is that a class view can be readily incorporated to learn a supervised representation that facilitates subsequent classification. As the class view is added without an assumption on the exclusivity of classes, MVLSP-2C is equally applicable to multiclass single-label and multilabel classifications. MVLSP-2C further optimizes the integration of latent variables based on their correlation. Extensive experiments on multiclass and multiview image datasets show that MVLSP-2C produces more accurate classification results compared with state-of-the-art methods.
ISSN: 2168-2216, 2168-2232
DOI: 10.1109/TSMC.2023.3327925
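
The summary outlines the general recipe behind MVLSP-2C: a shared latent space for consensus, view-specific latent spaces for complementarity, per-view projections that match the latent representation back to each view's original features, a penalty keeping the shared and view-specific representations dissimilar, and a class view treated as one more view to be reconstructed. The sketch below illustrates only that generic recipe and is not the authors' MVLSP-2C formulation; the variables (Z_s, Z_v, Ps, Pv, Q), the weights (lam_div, mu_reg), and the plain gradient-descent solver are all illustrative assumptions.

```python
# Hypothetical sketch of a consensus + complementarity multiview objective in the
# spirit of the summary above. NOT the authors' MVLSP-2C formulation: all symbols,
# weights, and the gradient-descent solver are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def fit_multiview_latent(Xs, Y, k_shared=10, k_view=5,
                         lam_div=1.0, mu_reg=1e-3, lr=1e-3, n_iter=500):
    """Minimize (by gradient descent):
         sum_v ||X_v - Z_s Ps_v - Z_v Pv_v||^2     per-view reconstruction
       + ||Y - Z_s Q||^2                           class "view" on the shared space
       + lam_div * sum_v ||Z_s^T Z_v||^2           shared vs. view-specific diversity
       + mu_reg * (||Z_s||^2 + sum_v ||Z_v||^2)    ridge regularization
    """
    n, V = Xs[0].shape[0], len(Xs)
    Z_s = 0.01 * rng.standard_normal((n, k_shared))              # shared (consensus) latent
    Z_v = [0.01 * rng.standard_normal((n, k_view)) for _ in Xs]  # view-specific latents
    Ps = [0.01 * rng.standard_normal((k_shared, X.shape[1])) for X in Xs]
    Pv = [0.01 * rng.standard_normal((k_view, X.shape[1])) for X in Xs]
    Q = 0.01 * rng.standard_normal((k_shared, Y.shape[1]))       # class-view projection

    for _ in range(n_iter):
        R = [Xs[v] - Z_s @ Ps[v] - Z_v[v] @ Pv[v] for v in range(V)]  # view residuals
        Ry = Y - Z_s @ Q                                              # class-view residual

        g_Zs = (-2 * sum(R[v] @ Ps[v].T for v in range(V)) - 2 * Ry @ Q.T
                + 2 * lam_div * sum(Z_v[v] @ (Z_v[v].T @ Z_s) for v in range(V))
                + 2 * mu_reg * Z_s)
        g_Q = -2 * Z_s.T @ Ry
        for v in range(V):
            g_Zv = (-2 * R[v] @ Pv[v].T
                    + 2 * lam_div * Z_s @ (Z_s.T @ Z_v[v])
                    + 2 * mu_reg * Z_v[v])
            Ps[v] -= lr * (-2 * Z_s.T @ R[v])
            Pv[v] -= lr * (-2 * Z_v[v].T @ R[v])
            Z_v[v] -= lr * g_Zv
        Z_s -= lr * g_Zs
        Q -= lr * g_Q
    return Z_s, Z_v, Q

# Toy usage: two views of 100 samples, with a 3-class one-hot label matrix as the class view.
X1, X2 = rng.standard_normal((100, 40)), rng.standard_normal((100, 60))
Y = np.eye(3)[rng.integers(0, 3, 100)]
Z_s, Z_views, Q = fit_multiview_latent([X1, X2], Y)
```

In this toy setup a sample would be classified by projecting it into the shared space and applying Q; how MVLSP-2C actually performs inference and integrates the latent variables based on their correlation is described in the full paper and not reproduced here.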