AM³Net: Adaptive Mutual-Learning-Based Multimodal Data Fusion Network


Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 32, No. 8, pp. 5411–5426
Main Authors: Wang, Jinping; Li, Jun; Shi, Yanli; Lai, Jianhuang; Tan, Xiaojun
Format: Journal Article
Language: English
Published: New York: IEEE, 1 August 2022 (The Institute of Electrical and Electronics Engineers, Inc.)

Abstract Multimodal data fusion, e.g., hyperspectral image (HSI) and light detection and ranging (LiDAR) data fusion, plays an important role in object recognition and classification tasks. However, existing methods pay little attention to the specificity of HSI spectral channels and the complementarity of HSI and LiDAR spatial information. In addition, the utilized feature extraction modules tend to consider the feature transmission processes among different modalities independently. Therefore, a new data fusion network named AM³Net is proposed for multimodal data classification; it includes three parts. First, an involution operator slides over the input HSI's spectral channels, which can independently measure the contribution rate of the spectral channel of each pixel to the spectral feature tensor construction. Furthermore, the spatial information of HSI and LiDAR data is integrated and excavated in an adaptively fused, modality-oriented manner. Second, a spectral-spatial mutual-guided module is designed for the feature collaborative transmission among spectral features and spatial information, which can increase the semantic relatedness connection through adaptive, multiscale, and mutual-learning transmission. Finally, the fused spatial-spectral features are embedded into a classification module to obtain the final results, which determines whether to continue updating the network weights. Experimental evaluations on HSI-LiDAR datasets indicate that AM³Net possesses a better feature representation ability than the state-of-the-art methods. Additionally, AM³Net still maintains considerable performance when its input is replaced with multispectral and synthetic aperture radar data. The result indicates that the proposed data fusion framework is compatible with diversified data types.
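The first stage described above builds spectral features with an involution operator that slides over the HSI spectral channels, generating a distinct kernel for every pixel so that each band's local contribution can be weighted independently. A minimal sketch of such a layer is given below, assuming a PyTorch implementation with stride 1; the class name Involution2d and the hyperparameters (kernel_size, groups, reduction) are illustrative choices, not the authors' released code.

```python
# Minimal sketch of a 2-D involution layer (in the spirit of Li et al.,
# CVPR 2021), assumed here as the spectral-channel operator the abstract
# refers to. All names and hyperparameters are illustrative.
import torch
import torch.nn as nn

class Involution2d(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3,
                 groups: int = 1, reduction: int = 4):
        super().__init__()
        self.k, self.groups = kernel_size, groups
        # Kernel-generating branch: produces one K x K kernel per pixel and
        # per group, conditioned on that pixel's own spectral vector.
        self.reduce = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.span = nn.Conv2d(channels // reduction,
                              kernel_size * kernel_size * groups, kernel_size=1)
        self.unfold = nn.Unfold(kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Per-pixel kernels: (B, G, 1, K*K, H, W).
        kernels = self.span(self.reduce(x)).view(
            b, self.groups, 1, self.k * self.k, h, w)
        # Neighbourhood patches: (B, G, C/G, K*K, H, W).
        patches = self.unfold(x).view(
            b, self.groups, c // self.groups, self.k * self.k, h, w)
        # Weighted sum over each K x K neighbourhood; the kernel is shared
        # across channels within a group (the inverse of convolution, which
        # shares kernels across space but not across channels).
        return (kernels * patches).sum(dim=3).view(b, c, h, w)

# Toy usage: an 11 x 11 HSI patch with 64 spectral bands.
x = torch.randn(2, 64, 11, 11)
print(Involution2d(64)(x).shape)  # torch.Size([2, 64, 11, 11])
```

Because the kernel is generated from the very pixel it is applied to, the layer can assign a different weighting at every spatial location, which matches the abstract's claim that the contribution rate of each pixel's spectral channel is measured independently.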
Authors:
– Jinping Wang (ORCID: 0000-0002-4157-8605; wangjp29@mail2.sysu.edu.cn), School of Intelligent Systems Engineering, Sun Yat-sen University, Guangzhou, China
– Jun Li (stslijun@mail.sysu.edu.cn), School of Intelligent Systems Engineering, Sun Yat-sen University, Guangzhou, China
– Yanli Shi (shiyli3@mail2.sysu.edu.cn), School of Intelligent Systems Engineering, Sun Yat-sen University, Guangzhou, China
– Jianhuang Lai (ORCID: 0000-0003-3883-2024; stsljh@mail.sysu.edu.cn), School of Computer Science and Engineering, and the Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, Sun Yat-sen University, Guangzhou, China
– Xiaojun Tan (ORCID: 0000-0003-0137-9270; tanxj@mail.sysu.edu.cn), School of Intelligent Systems Engineering, Sun Yat-sen University, Guangzhou, China
CODEN ITCTEM
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
DOI 10.1109/TCSVT.2022.3148257
Discipline Engineering
EISSN 1558-2205
EndPage 5426
Genre orig-research
Funding:
– Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai), Grant SML2020SP011
– Key-Area Research and Development Program of Guangdong Province, Grant 2020B090921003
ISSN 1051-8215
IsPeerReviewed true
IsScholarly true
Issue 8
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
PageCount 16
PublicationDate 2022-08-01
PublicationPlace New York
PublicationTitle IEEE transactions on circuits and systems for video technology
PublicationTitleAbbrev TCSVT
PublicationYear 2022
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 5411
SubjectTerms adaptive mutual-learning
Adaptive systems
multimodal data classification
Channels
Classification
Convolution
Convolutional neural networks
data fusion
Data integration
Feature extraction
Hyperspectral imaging
Involution networks
Kernel
Laser radar
Learning
Lidar
Modules
Object recognition
Radar data
Spatial data
Synthetic aperture radar
Tensors
Title AM³Net: Adaptive Mutual-Learning-Based Multimodal Data Fusion Network
URI https://ieeexplore.ieee.org/document/9698196
https://www.proquest.com/docview/2697571471
Volume 32