Joint training strategy of unimodal and multimodal for multimodal sentiment analysis

Bibliographic Details
Published in: Image and Vision Computing, Vol. 149, p. 105172
Main Authors: Li, Meng; Zhu, Zhenfang; Li, Kefeng; Zhou, Lihua; Zhao, Zhen; Pei, Hongli
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.09.2024
ISSN: 0262-8856
DOI: 10.1016/j.imavis.2024.105172

Summary: With the explosive growth of social media video content, research on multimodal sentiment analysis (MSA) has attracted considerable attention recently. Despite significant progress in MSA, challenges remain: current research mostly focuses on learning either unimodal features or aspects of multimodal interaction, neglecting the importance of considering unimodal features and intermodal interactions simultaneously. To address these challenges, this paper proposes a fusion strategy called Joint Training of Unimodal and Multimodal (JTUM). Specifically, this strategy combines a unimodal label generation module with a cross-modal transformer. The unimodal label generation module aims to generate more distinctive labels for each unimodal input, facilitating more effective learning of unimodal representations. Meanwhile, the cross-modal transformer is designed to treat each modality as a target modality and optimize it using the other modalities as source modalities, thereby learning the interactions between each pair of modalities. By jointly training unimodal and multimodal tasks, our model can focus on individual modality features while learning the interactions between modalities. Finally, to better capture temporal information and make predictions, we also add a self-attention transformer as the sequence model. Experimental results on the CMU-MOSI and CMU-MOSEI datasets demonstrate that JTUM outperforms current mainstream methods.
Highlights:
• Jointly training unimodal and multimodal tasks to optimize multimodal fusion.
• Using two modules for unimodal and multimodal learning.
• The proposed model achieves competitive results compared to the latest baselines.
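To make the joint-training idea in the abstract concrete, below is a minimal PyTorch-style sketch, assuming text/audio/vision feature sequences of equal dimension: unimodal prediction heads are trained alongside a multimodal branch built from pairwise cross-modal attention and a self-attention encoder over time. All class names, dimensions, and the loss weighting are illustrative assumptions, not the authors' implementation; in particular, the unimodal label generation module is approximated here by reusing the multimodal label.

    # All names and hyperparameters below are illustrative assumptions, not the authors' code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CrossModalBlock(nn.Module):
        # Cross-modal attention: a target modality attends to a source modality.
        def __init__(self, dim, heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, target, source):
            out, _ = self.attn(query=target, key=source, value=source)
            return self.norm(target + out)

    class JTUMSketch(nn.Module):
        # Joint unimodal + multimodal heads over text (t), audio (a), vision (v) sequences.
        def __init__(self, dim=64):
            super().__init__()
            pairs = [t + s for t in "tav" for s in "tav" if t != s]  # "ta" = text attends to audio
            self.cross = nn.ModuleDict({p: CrossModalBlock(dim) for p in pairs})
            self.seq = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(2 * dim, nhead=4, batch_first=True), num_layers=1)
            self.uni_heads = nn.ModuleDict({m: nn.Linear(dim, 1) for m in "tav"})
            self.fuse_head = nn.Linear(6 * dim, 1)

        def forward(self, t, a, v):
            feats = {"t": t, "a": a, "v": v}
            # Unimodal branch: one prediction per modality, paired with unimodal labels in training.
            uni = {m: self.uni_heads[m](feats[m].mean(dim=1)) for m in feats}
            # Multimodal branch: each modality is a target reinforced by the other two as sources,
            # followed by a self-attention transformer over time.
            fused = []
            for tgt in "tav":
                enriched = [self.cross[tgt + src](feats[tgt], feats[src])
                            for src in "tav" if src != tgt]
                fused.append(self.seq(torch.cat(enriched, dim=-1)).mean(dim=1))
            return uni, self.fuse_head(torch.cat(fused, dim=-1))

    # Joint loss: multimodal regression plus weighted unimodal tasks (the 0.1 weight is an
    # assumption; true unimodal labels would come from the paper's label generation module).
    model = JTUMSketch()
    t, a, v = (torch.randn(8, 20, 64) for _ in range(3))
    y = torch.randn(8, 1)
    uni, multi = model(t, a, v)
    loss = F.l1_loss(multi, y) + 0.1 * sum(F.l1_loss(p, y) for p in uni.values())
    loss.backward()

The sketch keeps the two branches separable so the unimodal heads can be supervised with their own generated labels while the fused head is supervised with the multimodal sentiment label, which is the joint-training contrast the abstract describes.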