SCANET: Improving multimodal representation and fusion with sparse‐ and cross‐attention for multimodal sentiment analysis

Learning unimodal representations and improving multimodal fusion are the two cores of multimodal sentiment analysis (MSA). However, previous methods ignore the information differences between modalities: the text modality carries higher-order semantic features than the other modalities. In this article, we...

Bibliographic Details
Published in Computer Animation and Virtual Worlds, Vol. 33, No. 3-4
Main Authors Wang, Hao; Yang, Mingchuan; Li, Zheng; Liu, Zhenhua; Hu, Jie; Fu, Ziwang; Liu, Feng
Format Journal Article
Language English
Published Chichester: Wiley Subscription Services, Inc., 01.06.2022