Unsupervised Cross-modal Hashing with Modality-interaction


Bibliographic Details
Published in IEEE Transactions on Circuits and Systems for Video Technology, Vol. 33, No. 9, p. 1
Main Authors Tu, Rong-Cheng, Jiang, Jie, Lin, Qinghong, Cai, Chengfei, Tian, Shangxuan, Wang, Hongfa, Liu, Wei
Format Journal Article
Language English
Published New York: IEEE, 01.09.2023
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)

Abstract Recently, numerous unsupervised cross-modal hashing methods have been proposed to deal with image-text retrieval tasks on unlabeled cross-modal data. However, when these methods learn to generate hash codes, almost all of them lack modality interaction in the following two aspects: (1) The instance similarity matrix used to guide the training of the hashing networks is constructed without image-text interaction, so it fails to capture the fine-grained cross-modal cues that characterize the intrinsic semantic similarity among the data points. (2) The binary codes used for the quantization loss are inferior because they are generated by directly quantizing a simple combination of the continuous hash codes from different modalities, without any interaction among these continuous codes. Both problems cause the generated hash codes to be of poor quality and degrade the retrieval performance. Hence, in this paper, we propose a novel method, Unsupervised Cross-modal Hashing with Modality-interaction (UCHM). Specifically, by optimizing a novel hash-similarity-friendly loss, a modality-interaction-enabled (MIE) similarity generator is first trained to produce a superior MIE similarity matrix for the training set. The generated MIE similarity matrix is then used as guiding information to train the deep hashing networks. Furthermore, during the training of the hashing networks, a novel bit-selection module generates high-quality unified binary codes for the quantization loss through interaction among the continuous codes from different modalities, thereby further enhancing the retrieval performance. Extensive experiments on two widely used datasets show that the proposed UCHM outperforms state-of-the-art techniques on cross-modal retrieval tasks.
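
To make the bit-selection idea in the abstract concrete, the following Python sketch illustrates one plausible reading. It is not the paper's implementation: the function names (select_unified_codes, quantization_loss) and the per-bit magnitude rule are assumptions chosen for exposition. The sketch forms unified binary codes by picking, bit by bit, the modality whose continuous code is more decisive, then measures a quantization loss between those unified codes and each modality's continuous codes.

    import numpy as np

    def select_unified_codes(h_img, h_txt):
        # Hypothetical per-bit selection: for each bit, keep the modality whose
        # continuous code lies farther from the undecided point 0, so the two
        # modalities interact when the unified binary code is formed.
        pick_img = np.abs(h_img) >= np.abs(h_txt)
        selected = np.where(pick_img, h_img, h_txt)
        return np.sign(selected)  # unified binary codes in {-1, +1}

    def quantization_loss(b, h_img, h_txt):
        # Mean squared gap between the unified binary codes and each modality's
        # continuous codes; a smaller value means binarization loses less.
        return np.mean((b - h_img) ** 2) + np.mean((b - h_txt) ** 2)

    # Toy usage: continuous codes in (-1, 1) for 4 samples and 8 bits.
    rng = np.random.default_rng(0)
    h_img = np.tanh(rng.normal(size=(4, 8)))
    h_txt = np.tanh(rng.normal(size=(4, 8)))
    b = select_unified_codes(h_img, h_txt)
    print(quantization_loss(b, h_img, h_txt))

Under this reading, minimizing the quantization loss against codes produced with cross-modal interaction pulls both modalities' continuous codes toward a shared binary representation, which matches the abstract's claim that the bit-selection module yields better unified codes than directly quantizing a simple combination of the two modalities' codes.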
Author Tu, Rong-Cheng
Cai, Chengfei
Tian, Shangxuan
Lin, Qinghong
Liu, Wei
Wang, Hongfa
Jiang, Jie
Author_xml – sequence: 1
  givenname: Rong-Cheng
  orcidid: 0000-0002-9567-159X
  surname: Tu
  fullname: Tu, Rong-Cheng
  organization: Tencent, China
– sequence: 2
  givenname: Jie
  orcidid: 0000-0001-9658-5127
  surname: Jiang
  fullname: Jiang, Jie
  organization: Tencent Data Platform, Shenzhen, Guangdong, China
– sequence: 3
  givenname: Qinghong
  surname: Lin
  fullname: Lin, Qinghong
– sequence: 4
  givenname: Chengfei
  surname: Cai
  fullname: Cai, Chengfei
  organization: Tencent Data Platform, Shenzhen, Guangdong, China
– sequence: 5
  givenname: Shangxuan
  surname: Tian
  fullname: Tian, Shangxuan
  organization: Tencent Data Platform, Shenzhen, Guangdong, China
– sequence: 6
  givenname: Hongfa
  surname: Wang
  fullname: Wang, Hongfa
  organization: Tencent Data Platform, Shenzhen, Guangdong, China
– sequence: 7
  givenname: Wei
  orcidid: 0000-0002-3865-8145
  surname: Liu
  fullname: Liu, Wei
  organization: Tencent Data Platform, Shenzhen, Guangdong, China
CODEN ITCTEM
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023
DOI 10.1109/TCSVT.2023.3251395
Discipline Engineering
EISSN 1558-2205
EndPage 1
Genre orig-research
ISSN 1051-8215
IsPeerReviewed true
IsScholarly true
Issue 9
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
ORCID 0000-0002-9567-159X
0000-0001-9658-5127
0000-0002-3865-8145
PageCount 1
PublicationDate 2023-09-01
PublicationPlace New York
PublicationTitle IEEE transactions on circuits and systems for video technology
PublicationTitleAbbrev TCSVT
PublicationYear 2023
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 1
SubjectTerms Binary codes
Bit-selection
Cross-modal Retrieval
Generators
Hash functions
Hashing
Modal data
Modality-interaction
Networks
Quantization (signal)
Retrieval
Semantics
Similarity
Task analysis
Training
Title Unsupervised Cross-modal Hashing with Modality-interaction
URI https://ieeexplore.ieee.org/document/10057259
https://www.proquest.com/docview/2861467702
Volume 33