Multimodal Fusion with Dual-Attention Based on Textual Double-Embedding Networks for Rumor Detection
Published in | Applied sciences Vol. 13; no. 8; p. 4886 |
---|---|
Main Authors | , , , , |
Format | Journal Article |
Language | English |
Published | Basel: MDPI AG, 01.04.2023 |
Summary: | Rumors can have a negative impact on social life, and online rumors that combine several modalities are more likely than purely textual ones to mislead users and spread widely, so multimodal rumor detection cannot be ignored. Current multimodal detection methods do not focus on fusing text features with picture-region object features, so we propose TDEDA (dual attention based on textual double embedding), a multimodal fusion neural network for rumor detection that performs high-level information interaction at the text–image object level and uses an attention mechanism to capture the visual features associated with keywords. In this way, we explored how different modalities can assist one another in enhancing feature representations for rumor detection, and how the dense interactions between images and text can be captured. Comparative experiments on two multimodal rumor detection datasets showed that TDEDA handles multimodal information effectively and improves detection accuracy over current multimodal rumor detection methods. |
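The summary describes an attention mechanism in which textual keywords attend over picture-region object features to pick out the visual features most relevant to the text. The paper's own architecture is not reproduced in this record, but the core operation it names can be sketched as standard scaled dot-product cross-attention; all names, shapes, and dimensions below are illustrative assumptions, not details from TDEDA.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def text_to_image_attention(text_tokens, region_feats):
    """Cross-attention sketch: text tokens (queries) attend over
    image-region features (keys/values).

    text_tokens:  (T, d) token/keyword embeddings
    region_feats: (R, d) picture-region object features
    Returns (T, d): a visual context vector for each text token,
    weighted toward the regions most associated with that token.
    """
    d = text_tokens.shape[-1]
    scores = text_tokens @ region_feats.T / np.sqrt(d)  # (T, R) affinities
    weights = softmax(scores, axis=-1)                  # rows sum to 1
    return weights @ region_feats                       # (T, d) visual context

# Hypothetical shapes: 5 text tokens, 7 image regions, 32-dim features.
rng = np.random.default_rng(0)
txt = rng.standard_normal((5, 32))
img = rng.standard_normal((7, 32))
ctx = text_to_image_attention(txt, img)
print(ctx.shape)
```

In a full model, learned query/key/value projections would precede the dot product, and the resulting visual context would be fused with the textual representation before classification; this sketch only shows the text-to-image attention step itself.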
ISSN: | 2076-3417 |
DOI: | 10.3390/app13084886 |