SCANET: Improving multimodal representation and fusion with sparse‐ and cross‐attention for multimodal sentiment analysis
Published in: Computer animation and virtual worlds, Vol. 33, no. 3-4
Main Authors:
Format: Journal Article
Language: English
Published: Chichester: Wiley Subscription Services, Inc., 01.06.2022
Summary: Learning unimodal representations and improving multimodal fusion are the two cores of multimodal sentiment analysis (MSA). However, previous methods ignore the information differences between modalities: the text modality carries higher‐order semantic features than the other modalities. In this article, we propose a sparse‐ and cross‐attention (SCANET) framework with an asymmetric architecture that improves multimodal representation and fusion. Specifically, in the unimodal representation stage, we use sparse attention to improve the representation efficiency of two modalities and to reduce the low‐order redundant features of the audio and visual modalities. In the multimodal fusion stage, we design an innovative asymmetric fusion module that uses the audio and visual modality information matrices as weights to strengthen the target text modality. We also introduce contrastive learning to effectively enhance complementary features between modalities. We apply SCANET to the CMU‐MOSI and CMU‐MOSEI datasets, and experimental results show that our proposed method achieves state‐of‐the‐art performance.

We propose a sparse‐ and cross‐attention framework for multimodal sentiment analysis. First, we use sparse attention to improve the efficiency of representation learning. Then, we design an asymmetric fusion module that uses fused features as weights to reinforce the target modality. Further, we introduce contrastive learning to efficiently enhance modality‐consistency and modality‐specificity information.
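The summary does not give implementation details for the sparse-attention stage; top-k score masking is one common way to realize sparse attention for pruning low‐order redundant audio and visual features. The sketch below assumes that scheme and PyTorch tensors; the function name `topk_sparse_attention` and the `top_k` parameter are illustrative, not from the paper.

```python
# Illustrative sketch only: top-k sparse self-attention as one possible reading
# of the "sparse attention" used for the audio/visual representation stage.
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, top_k: int = 8):
    """q, k, v: (B, L, d). Keep only the top_k largest attention scores per query row."""
    scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5          # (B, L, L) scaled dot products
    kth = scores.topk(min(top_k, scores.size(-1)), dim=-1).values[..., -1:]  # per-row threshold
    scores = scores.masked_fill(scores < kth, float("-inf"))      # drop low-scoring (redundant) links
    return F.softmax(scores, dim=-1) @ v                          # sparse-weighted values
```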
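For the asymmetric fusion module, the abstract only states that the audio and visual information matrices act as weights that strengthen the target text modality. A minimal cross-attention reading of that description, with text as the query stream and audio/visual as keys and values, might look like the following sketch; the module name, dimensions, and single-head formulation are assumptions, not the authors' implementation.

```python
# Illustrative sketch only: audio- and visual-derived attention weights reinforce
# the text representation, which remains the target modality.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AsymmetricFusion(nn.Module):
    def __init__(self, d_text: int, d_audio: int, d_visual: int, d_model: int = 128):
        super().__init__()
        # Text supplies queries; audio and visual supply keys and values.
        self.q_text = nn.Linear(d_text, d_model)
        self.kv_audio = nn.Linear(d_audio, 2 * d_model)
        self.kv_visual = nn.Linear(d_visual, 2 * d_model)
        self.out = nn.Linear(2 * d_model, d_model)

    def attend(self, q, kv):
        k, v = kv.chunk(2, dim=-1)
        w = F.softmax(q @ k.transpose(-2, -1) / q.size(-1) ** 0.5, dim=-1)
        return w @ v

    def forward(self, text, audio, visual):
        q = self.q_text(text)                        # (B, Lt, d_model) text queries
        a = self.attend(q, self.kv_audio(audio))     # audio-weighted text features
        vi = self.attend(q, self.kv_visual(visual))  # visual-weighted text features
        # Residual connection keeps the text stream as the strengthened target.
        return q + self.out(torch.cat([a, vi], dim=-1))
```

The residual connection keeps the text stream as the target modality while the audio- and visual-conditioned terms act as reinforcing weights, which is one way to read the asymmetric design described in the abstract.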
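The contrastive-learning component is likewise only named, not specified. An InfoNCE-style loss between pooled cross-modal representations of the same sample is one plausible instantiation; the pooling, temperature, and symmetric pairing below are assumptions for illustration.

```python
# Illustrative sketch only: InfoNCE-style cross-modal contrastive loss that pulls
# same-sample representations together and pushes different samples apart.
import torch
import torch.nn.functional as F

def infonce(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z_a, z_b: (B, d) pooled representations of two modalities for the same batch."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature                       # (B, B) cross-modal similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)     # positives on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Usage sketch (hypothetical weighting): combine with the task loss, e.g.
#   loss = task_loss + lam * (infonce(text_repr, audio_repr) + infonce(text_repr, visual_repr))
```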
Bibliography: Funding information: China Telecom Corporation Limited Research Institute Research Funding, Grant/Award Number: I‐2022‐06
ISSN: 1546-4261, 1546-427X
DOI: 10.1002/cav.2090