Deep Adaptively-Enhanced Hashing With Discriminative Similarity Guidance for Unsupervised Cross-Modal Retrieval

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 32, No. 10, pp. 7255–7268
Main Authors: Shi, Yufeng; Zhao, Yue; Liu, Xin; Zheng, Feng; Ou, Weihua; You, Xinge; Peng, Qinmu
Format: Journal Article
Language: English
Published: New York: IEEE, 01.10.2022
Publisher: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Abstract Cross-modal hashing, which leverages hash functions to project high-dimensional data from different modalities into a compact common Hamming space, has shown great potential in cross-modal retrieval. To ease labeling costs, unsupervised cross-modal hashing methods have been proposed. However, existing unsupervised methods still suffer from two weaknesses in the optimization of hash functions: 1) similarity guidance: they rarely give a clear definition of whether two data points are similar or not, leaving redundant information in the learned codes; 2) optimization strategy: they ignore the fact that the similarity-learning abilities of different hash functions differ, which leaves the hash function of one modality weaker than that of the other. To alleviate these limitations, this paper proposes an unsupervised cross-modal hashing method, termed Deep Adaptively-Enhanced Hashing (DAEH), that trains hash functions with discriminative similarity guidance and an adaptively-enhanced optimization strategy. Specifically, to estimate similarity relations with discriminability, Information Mixed Similarity Estimation (IMSE) is designed by integrating information from distance distributions and the similarity ratio. Moreover, an Adaptive Teacher Guided Enhancement (ATGE) optimization strategy is designed, which employs information theory to discover the weaker hash function and utilizes an extra teacher network to enhance it. Extensive experiments on three benchmark datasets demonstrate the superiority of the proposed DAEH against state-of-the-art methods.
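The pipeline the abstract describes — hashing each modality's features into binary codes, ranking by Hamming distance, and supervising training with an estimated cross-modal similarity matrix — can be sketched roughly as follows. This is a minimal illustrative stand-in, not the authors' implementation: the projection, threshold, and `pseudo_similarity` function are hypothetical, and DAEH's actual IMSE and ATGE components are considerably more involved.

```python
import numpy as np

def binarize(features, projection):
    """Hash continuous features into {-1, +1} codes via a linear projection."""
    return np.sign(features @ projection)

def hamming_distance(codes_a, codes_b):
    """Pairwise Hamming distances between two {-1, +1} code matrices."""
    k = codes_a.shape[1]                      # code length in bits
    return 0.5 * (k - codes_a @ codes_b.T)    # agreement count -> distance

def pseudo_similarity(img_feats, txt_feats, threshold=0.6):
    """Toy similarity guidance: cosine similarity of raw features,
    thresholded into similar (+1) / dissimilar (-1) pairs. A crude
    stand-in for the paper's discriminative estimation (IMSE)."""
    def l2norm(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    sim = l2norm(img_feats) @ l2norm(txt_feats).T
    return np.where(sim > threshold, 1.0, -1.0)

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 32))    # 8 image feature vectors
txt = rng.normal(size=(8, 32))    # 8 text feature vectors
proj = rng.normal(size=(32, 16))  # shared random 16-bit hash projection

img_codes = binarize(img, proj)
txt_codes = binarize(txt, proj)
dist = hamming_distance(img_codes, txt_codes)  # 8x8, values in [0, 16]
guide = pseudo_similarity(img, txt)            # 8x8 matrix of +/-1 targets
```

In a trained system, a loss would push `dist` to be small exactly where `guide` is +1; here both matrices are computed from random data purely to show the shapes and value ranges involved.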
Author_xml – sequence: 1
  givenname: Yufeng
  orcidid: 0000-0002-9217-4352
  surname: Shi
  fullname: Shi, Yufeng
  email: yufengshi17@hust.edu.cn
  organization: School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China
– sequence: 2
  givenname: Yue
  surname: Zhao
  fullname: Zhao, Yue
  email: zhaoyhu@hubu.edu.cn
  organization: School of Computer Science and Information Engineering, Hubei University, Wuhan, China
– sequence: 3
  givenname: Xin
  surname: Liu
  fullname: Liu, Xin
  email: xliu@hqu.edu.cn
  organization: Department of Computer Science, Huaqiao University, Xiamen, China
– sequence: 4
  givenname: Feng
  orcidid: 0000-0002-1701-9141
  surname: Zheng
  fullname: Zheng, Feng
  email: f.zheng@ieee.org
  organization: Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
– sequence: 5
  givenname: Weihua
  orcidid: 0000-0001-5241-7703
  surname: Ou
  fullname: Ou, Weihua
  email: ouweihuahust@gmail.com
  organization: School of Big Data and Computer Science, Guizhou Normal University, Guiyang, China
– sequence: 6
  givenname: Xinge
  orcidid: 0000-0003-0607-1777
  surname: You
  fullname: You, Xinge
  email: youxg@hust.edu.cn
  organization: School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China
– sequence: 7
  givenname: Qinmu
  orcidid: 0000-0003-4863-5681
  surname: Peng
  fullname: Peng, Qinmu
  email: pengqinmu@hust.edu.cn
  organization: School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China
CODEN ITCTEM
CitedBy_id crossref_primary_10_1109_TMM_2023_3323884
crossref_primary_10_1016_j_ins_2023_119543
crossref_primary_10_1007_s13735_023_00268_7
crossref_primary_10_1007_s11042_024_19371_w
crossref_primary_10_1109_TCSVT_2023_3319633
crossref_primary_10_3390_e26110911
crossref_primary_10_1016_j_engappai_2024_108969
crossref_primary_10_1109_TMM_2023_3349075
crossref_primary_10_1007_s13042_024_02154_y
crossref_primary_10_1109_TCSVT_2024_3350695
crossref_primary_10_1109_TCSVT_2024_3489886
crossref_primary_10_1016_j_neucom_2024_127911
crossref_primary_10_1145_3697353
crossref_primary_10_1016_j_engappai_2024_108197
crossref_primary_10_1007_s10489_023_04715_0
crossref_primary_10_1109_TCSVT_2024_3411298
crossref_primary_10_1007_s41019_024_00274_7
crossref_primary_10_1016_j_engappai_2023_106473
crossref_primary_10_1007_s10462_025_11152_7
crossref_primary_10_1016_j_ipm_2024_103958
crossref_primary_10_1007_s13735_024_00326_8
crossref_primary_10_1109_TCSVT_2023_3320444
crossref_primary_10_1109_TCSVT_2023_3293104
crossref_primary_10_1109_ACCESS_2024_3444817
crossref_primary_10_1016_j_knosys_2024_112547
crossref_primary_10_3390_s23073439
crossref_primary_10_1007_s11042_023_18048_0
crossref_primary_10_1016_j_imavis_2025_105421
crossref_primary_10_1109_TCSVT_2023_3251395
crossref_primary_10_1109_TCSVT_2023_3281868
crossref_primary_10_3390_app13127278
crossref_primary_10_1109_TCSVT_2023_3312385
crossref_primary_10_1109_TCSVT_2024_3374791
crossref_primary_10_1016_j_neucom_2024_128830
crossref_primary_10_1109_TCSVT_2024_3376373
crossref_primary_10_1109_TMM_2023_3245400
crossref_primary_10_1109_TCSVT_2023_3340102
crossref_primary_10_1109_TCSVT_2023_3285266
crossref_primary_10_1007_s13735_025_00353_z
crossref_primary_10_1109_TCSVT_2023_3287301
crossref_primary_10_3390_app14020870
crossref_primary_10_1109_TCSVT_2023_3263054
Cites_doi 10.1109/TPAMI.2018.2798607
10.1609/aaai.v35i5.16592
10.1109/TCSVT.2017.2723302
10.1109/CVPR.2015.7299011
10.1109/TKDE.2020.2987312
10.1145/2463676.2465274
10.1109/TPAMI.2019.2932976
10.1109/CVPR.2018.00446
10.1109/TMM.2019.2922128
10.1609/aaai.v31i1.10719
10.1109/ICCV48922.2021.00986
10.1109/TCSVT.2020.3042972
10.1145/3123266.3123345
10.1145/3372278.3390673
10.1145/1460096.1460104
10.1145/3362065
10.1109/CVPR42600.2020.00319
10.1109/TCSVT.2017.2705068
10.1007/978-3-319-10602-1_48
10.1109/TIP.2020.2963957
10.1109/CVPR.2016.90
10.1109/ICME.2019.00015
10.1109/MSP.2017.2738401
10.1109/TKDE.2020.2974825
10.1145/3323873.3325041
10.1016/j.patcog.2021.108084
10.1145/2600428.2609610
10.1609/aaai.v32i1.11263
10.1609/aaai.v33i01.3301176
10.1145/3397271.3401086
10.1007/s11280-020-00859-y
10.24963/ijcai.2018/148
10.1109/CVPR.2017.348
10.1109/ICME51207.2021.9428330
10.1109/TIP.2016.2607421
10.1109/CVPR.2009.5206848
10.1109/TCSVT.2020.2974877
10.1109/TIP.2018.2821921
10.1109/ICCV.2019.00312
10.24963/ijcai.2018/396
10.1145/3474085.3475286
10.1145/1646396.1646452
10.1109/TCSVT.2019.2911359
10.1145/3240508.3240684
10.1016/j.patcog.2020.107479
10.1109/TPAMI.2019.2940446
10.1109/TKDE.2020.2970050
10.1109/CVPR.2017.672
10.1109/TPAMI.2021.3055564
10.1145/3343031.3351055
10.1145/3460426.3463626
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
DBID 97E
RIA
RIE
AAYXX
CITATION
7SC
7SP
8FD
JQ2
L7M
L~C
L~D
DOI 10.1109/TCSVT.2022.3172716
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Xplore
CrossRef
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
DatabaseTitle CrossRef
Technology Research Database
Computer and Information Systems Abstracts – Academic
Electronics & Communications Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts Professional
DatabaseTitleList
Technology Research Database
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://proxy.k.utb.cz/login?url=https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
Computer Science
EISSN 1558-2205
EndPage 7268
ExternalDocumentID 10_1109_TCSVT_2022_3172716
9768805
Genre orig-research
GrantInformation_xml – fundername: Open Project of Zhejiang Laboratory
  grantid: 2021KH0AB01
– fundername: National Natural Science Foundation of China
  grantid: 62172177; 62101179; 61762021
  funderid: 10.13039/501100001809
– fundername: Project of Hubei University School
  grantid: 202011903000002
  funderid: 10.13039/501100017589
– fundername: Key Research and Development Plan of Hubei Province
  grantid: 2020BAB027
– fundername: Natural Science Foundation of Hubei Province
  grantid: 2021CFB332
  funderid: 10.13039/501100003819
ISSN 1051-8215
IsPeerReviewed true
IsScholarly true
Issue 10
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
ORCID 0000-0002-9217-4352
0000-0003-4863-5681
0000-0003-0607-1777
0000-0001-5241-7703
0000-0002-1701-9141
PQID 2721428303
PQPubID 85433
PageCount 14
ParticipantIDs proquest_journals_2721428303
crossref_citationtrail_10_1109_TCSVT_2022_3172716
crossref_primary_10_1109_TCSVT_2022_3172716
ieee_primary_9768805
PublicationCentury 2000
PublicationDate 2022-10-01
PublicationDateYYYYMMDD 2022-10-01
PublicationDate_xml – month: 10
  year: 2022
  text: 2022-10-01
  day: 01
PublicationDecade 2020
PublicationPlace New York
PublicationPlace_xml – name: New York
PublicationTitle IEEE transactions on circuits and systems for video technology
PublicationTitleAbbrev TCSVT
PublicationYear 2022
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
References ref13
ref57
ref12
Mikriukov (ref21) 2022
ref15
ref14
ref58
ref53
ref11
ref55
ref10
ref54
Chen (ref42) 2019
Simonyan (ref41) 2014
ref17
ref16
ref19
ref18
ref51
ref50
ref46
ref48
ref47
ref44
ref43
ref49
ref8
ref7
ref9
ref4
ref3
ref6
ref5
ref40
ref35
ref34
ref37
ref36
Xu (ref1) 2013
ref31
ref30
ref33
ref32
ref2
ref39
ref38
ref24
ref23
ref26
ref25
ref20
ref22
Wang (ref52)
ref28
ref27
ref29
Dosovitskiy (ref45) 2020
Krizhevsky (ref56); 25
References_xml – ident: ref3
  doi: 10.1109/TPAMI.2018.2798607
– ident: ref19
  doi: 10.1609/aaai.v35i5.16592
– ident: ref39
  doi: 10.1109/TCSVT.2017.2723302
– year: 2022
  ident: ref21
  article-title: Deep unsupervised contrastive hashing for large-scale cross-modal text-image retrieval in remote sensing
  publication-title: arXiv:2201.08125
– ident: ref36
  doi: 10.1109/CVPR.2015.7299011
– ident: ref12
  doi: 10.1109/TKDE.2020.2987312
– ident: ref22
  doi: 10.1145/2463676.2465274
– ident: ref44
  doi: 10.1109/TPAMI.2019.2932976
– ident: ref28
  doi: 10.1109/CVPR.2018.00446
– ident: ref23
  doi: 10.1109/TMM.2019.2922128
– ident: ref31
  doi: 10.1609/aaai.v31i1.10719
– ident: ref58
  doi: 10.1109/ICCV48922.2021.00986
– ident: ref8
  doi: 10.1109/TCSVT.2020.3042972
– ident: ref32
  doi: 10.1145/3123266.3123345
– ident: ref40
  doi: 10.1145/3372278.3390673
– ident: ref48
  doi: 10.1145/1460096.1460104
– ident: ref6
  doi: 10.1145/3362065
– start-page: 3890
  volume-title: Proc. Int. Joint Conf. Artif. Intell.
  ident: ref52
  article-title: Semantic topic multimodal hashing for cross-media retrieval
– ident: ref25
  doi: 10.1109/CVPR42600.2020.00319
– ident: ref4
  doi: 10.1109/TCSVT.2017.2705068
– year: 2020
  ident: ref45
  article-title: An image is worth 16x16 words: Transformers for image recognition at scale
  publication-title: arXiv:2010.11929
– ident: ref49
  doi: 10.1007/978-3-319-10602-1_48
– ident: ref29
  doi: 10.1109/TIP.2020.2963957
– ident: ref57
  doi: 10.1109/CVPR.2016.90
– ident: ref33
  doi: 10.1109/ICME.2019.00015
– ident: ref2
  doi: 10.1109/MSP.2017.2738401
– ident: ref10
  doi: 10.1109/TKDE.2020.2974825
– ident: ref35
  doi: 10.1145/3323873.3325041
– ident: ref13
  doi: 10.1016/j.patcog.2021.108084
– ident: ref51
  doi: 10.1145/2600428.2609610
– ident: ref24
  doi: 10.1609/aaai.v32i1.11263
– ident: ref55
  doi: 10.1609/aaai.v33i01.3301176
– ident: ref18
  doi: 10.1145/3397271.3401086
– ident: ref20
  doi: 10.1007/s11280-020-00859-y
– ident: ref43
  doi: 10.24963/ijcai.2018/148
– volume: 25
  start-page: 1097
  volume-title: Proc. Adv. Neural Inf. Process. Syst. (NIPS)
  ident: ref56
  article-title: ImageNet classification with deep convolutional neural networks
– ident: ref27
  doi: 10.1109/CVPR.2017.348
– ident: ref38
  doi: 10.1109/ICME51207.2021.9428330
– ident: ref53
  doi: 10.1109/TIP.2016.2607421
– ident: ref54
  doi: 10.1109/CVPR.2009.5206848
– ident: ref14
  doi: 10.1109/TCSVT.2020.2974877
– year: 2013
  ident: ref1
  article-title: A survey on multi-view learning
  publication-title: arXiv:1304.5634
– ident: ref34
  doi: 10.1109/TIP.2018.2821921
– ident: ref16
  doi: 10.1109/ICCV.2019.00312
– ident: ref30
  doi: 10.24963/ijcai.2018/396
– year: 2019
  ident: ref42
  article-title: Med3D: Transfer learning for 3D medical image analysis
  publication-title: arXiv:1904.00625
– ident: ref47
  doi: 10.1145/3474085.3475286
– year: 2014
  ident: ref41
  article-title: Very deep convolutional networks for large-scale image recognition
  publication-title: arXiv:1409.1556
– ident: ref50
  doi: 10.1145/1646396.1646452
– ident: ref11
  doi: 10.1109/TCSVT.2019.2911359
– ident: ref5
  doi: 10.1145/3240508.3240684
– ident: ref17
  doi: 10.1016/j.patcog.2020.107479
– ident: ref37
  doi: 10.1109/TPAMI.2019.2940446
– ident: ref9
  doi: 10.1109/TKDE.2020.2970050
– ident: ref15
  doi: 10.1109/CVPR.2017.672
– ident: ref46
  doi: 10.1109/TPAMI.2021.3055564
– ident: ref7
  doi: 10.1145/3343031.3351055
– ident: ref26
  doi: 10.1145/3460426.3463626
SourceID proquest
crossref
ieee
SourceType Aggregation Database
Enrichment Source
Index Database
Publisher
StartPage 7255
SubjectTerms Annotations
Codes
Computer science
Cross-modal retrieval
Data points
Estimation
Hash functions
Information theory
Optimization
optimization strategy
Retrieval
Semantics
Similarity
similarity estimation
Teachers
unsupervised deep hashing
Title Deep Adaptively-Enhanced Hashing With Discriminative Similarity Guidance for Unsupervised Cross-Modal Retrieval
URI https://ieeexplore.ieee.org/document/9768805
https://www.proquest.com/docview/2721428303
Volume 32