Multimodal Fusion with Dual-Attention Based on Textual Double-Embedding Networks for Rumor Detection

Bibliographic Details
Published in Applied Sciences, Vol. 13, No. 8, p. 4886
Main Authors Han, Huawei; Ke, Zunwang; Nie, Xiangyang; Dai, Li; Slamu, Wushour
Format Journal Article
Language English
Published Basel: MDPI AG, 01.04.2023
Abstract Rumors can have a negative impact on social life, and online rumors that combine multiple modalities are more likely than purely textual rumors to mislead users and spread, so multimodal rumor detection cannot be ignored. Current multimodal rumor detection methods do not focus on fusing textual features with picture-region object features, so we propose TDEDA (dual-attention based on textual double embedding), a multimodal fusion neural network for rumor detection that performs high-level information interaction at the text–image object level and uses an attention mechanism to capture the visual features associated with keywords. In this way, we explored how assistance from different modalities can enhance feature representation in rumor detection and how the correlations arising from the dense interaction between images and text can be captured. We conducted comparative experiments on two multimodal rumor detection datasets. The results showed that TDEDA handles multimodal information reasonably and improves the accuracy of rumor detection compared with current relevant multimodal rumor detection methods.
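The abstract describes attention-driven fusion in which text embeddings interact with picture-region object features at the text–image object level. The following is a minimal, hypothetical PyTorch sketch of such a dual cross-modal attention layer for a binary rumor/non-rumor classifier; it is not the authors' TDEDA implementation, and all module names, dimensions, pooling choices, and the classification head are illustrative assumptions.

```python
# Minimal sketch (NOT the authors' TDEDA architecture): dual cross-modal
# attention where text token embeddings attend over image-region features
# and vice versa, illustrating "capture visual features associated with
# keywords". Dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn

class TextImageDualAttention(nn.Module):
    def __init__(self, text_dim=768, img_dim=2048, hidden=256, heads=4, num_classes=2):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)   # project text token embeddings
        self.img_proj = nn.Linear(img_dim, hidden)     # project image-region features
        # text-to-image and image-to-text attention
        self.t2i = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.i2t = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)  # rumor / non-rumor

    def forward(self, text_emb, img_regions):
        # text_emb: (B, T, text_dim); img_regions: (B, R, img_dim)
        t = self.text_proj(text_emb)
        v = self.img_proj(img_regions)
        t_ctx, _ = self.t2i(query=t, key=v, value=v)   # keyword-guided visual context
        v_ctx, _ = self.i2t(query=v, key=t, value=t)   # region-guided textual context
        fused = torch.cat([t_ctx.mean(dim=1), v_ctx.mean(dim=1)], dim=-1)
        return self.classifier(fused)

# Usage with random tensors standing in for BERT-style text embeddings
# and detector-extracted region features.
model = TextImageDualAttention()
logits = model(torch.randn(2, 20, 768), torch.randn(2, 36, 2048))
print(logits.shape)  # torch.Size([2, 2])
```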
Audience Academic
Author Ke, Zunwang
Han, Huawei
Dai, Li
Nie, Xiangyang
Slamu, Wushour
ContentType Journal Article
Copyright COPYRIGHT 2023 MDPI AG
2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
DOI 10.3390/app13084886
Discipline Engineering
Sciences (General)
EISSN 2076-3417
GeographicLocations Germany
ISSN 2076-3417
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 8
Language English
License https://creativecommons.org/licenses/by/4.0
ORCID 0000-0002-2589-8377 (Ke, Zunwang)
OpenAccessLink https://doaj.org/article/2c733cbc33cf4024bc00bf4f71def23e
PublicationCentury 2000
PublicationDate 2023-04-01
PublicationDecade 2020
PublicationPlace Basel
PublicationTitle Applied sciences
PublicationYear 2023
Publisher MDPI AG
StartPage 4886
SubjectTerms attention mechanism
Deep learning
False information
Gossip
Multimedia
multimodal fusion
Neural networks
Research methodology
rumor detection
Social networks
URI https://www.proquest.com/docview/2806477065
https://doaj.org/article/2c733cbc33cf4024bc00bf4f71def23e
Volume 13