Context-Based Adaptive Multimodal Fusion Network for Continuous Frame-Level Sentiment Prediction

Bibliographic Details
Published in: IEEE/ACM Transactions on Audio, Speech, and Language Processing, Vol. 31, pp. 3468-3477
Main Authors: Huang, Maochun; Qing, Chunmei; Tan, Junpeng; Xu, Xiangmin
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2023
Summary: Recently, video sentiment computing has become a focus of research because of its benefits in many applications such as digital marketing, education, and healthcare. The difficulty of video sentiment prediction lies mainly in the regression accuracy of long-term sequences and in how to integrate different modalities; in particular, different modalities may express different emotions. To maintain the continuity of long time-series sentiments and mitigate multimodal conflicts, this article proposes a novel Context-Based Adaptive Multimodal Fusion Network (CAMFNet) for consecutive frame-level sentiment prediction. A Context-Based Transformer (CBT) module is specifically designed to embed clip features into continuous frame features, leveraging its capability to enhance the consistency of prediction results. Moreover, to resolve conflicts between modalities, this article proposes an Adaptive Multimodal Fusion (AMF) method based on the self-attention mechanism. It dynamically determines the degree of shared semantics across modalities, enabling the model to flexibly adapt its fusion strategy. Through adaptive fusion of multimodal features, the AMF method effectively resolves potential conflicts arising from diverse modalities, ultimately enhancing the overall performance of the model. The proposed CAMFNet ensures the continuity of long time-series sentiments in consecutive frame-level prediction. Extensive experiments illustrate the superiority of the proposed method, especially on videos with multimodal conflicts.
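
The abstract does not give implementation details, but the fusion idea can be made concrete. Below is a minimal PyTorch sketch of a self-attention fusion layer in the spirit of AMF; the module name AdaptiveFusion, the dimensions, and the softmax gating are illustrative assumptions, not the authors' published implementation.

# Hypothetical sketch of self-attention-based adaptive fusion (AMF-style).
# Names, shapes, and the gating scheme are assumptions, not the published AMF.
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Fuses per-frame features from several modalities.

    Each modality contributes one token per frame; self-attention across
    the modality tokens estimates how much semantics the modalities share,
    and a learned softmax gate weights each modality before summing.
    """

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(d_model, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch * frames, n_modalities, d_model), one token per modality
        attended, _ = self.attn(feats, feats, feats)         # cross-modal self-attention
        weights = torch.softmax(self.gate(attended), dim=1)  # per-modality fusion gate
        return (weights * attended).sum(dim=1)               # (batch * frames, d_model)

# Example: 3 modalities (e.g., visual, audio, text), 8 frames, batch of 2
tokens = torch.randn(2 * 8, 3, 256)
fused = AdaptiveFusion()(tokens)
print(fused.shape)  # torch.Size([16, 256])

Under this sketch, near-uniform gate weights indicate strongly shared semantics across modalities, while a peaked gate lets one modality dominate when the modalities conflict; the published AMF may realize this adaptivity differently.
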
ISSN: 2329-9290, 2329-9304
DOI: 10.1109/TASLP.2023.3321971