Triplet-Based Deep Hashing Network for Cross-Modal Retrieval
Published in | IEEE Transactions on Image Processing, Vol. 27, No. 8, pp. 3893-3903 |
---|---|
Main Authors | Cheng Deng; Zhaojia Chen; Xianglong Liu; Xinbo Gao; Dacheng Tao |
Format | Journal Article |
Language | English |
Published | United States: IEEE, 01.08.2018 |
Subjects | Correlation; cross-modal retrieval; Deep neural network; graph regularization; hashing; Indexes; Internet; Multimedia communication; Neural networks; Semantics; Training data; triplet labels |
Abstract | Given the benefits of its low storage requirements and high retrieval efficiency, hashing has recently received increasing attention. In particular, cross-modal hashing has been widely and successfully used in multimedia similarity search applications. However, almost all existing methods employing cross-modal hashing cannot obtain powerful hash codes due to their ignoring the relative similarity between heterogeneous data that contains richer semantic information, leading to unsatisfactory retrieval performance. In this paper, we propose a triplet-based deep hashing (TDH) network for cross-modal retrieval. First, we utilize the triplet labels, which describe the relative relationships among three instances as supervision in order to capture more general semantic correlations between cross-modal instances. We then establish a loss function from the inter-modal view and the intra-modal view to boost the discriminative abilities of the hash codes. Finally, graph regularization is introduced into our proposed TDH method to preserve the original semantic similarity between hash codes in Hamming space. Experimental results show that our proposed method outperforms several state-of-the-art approaches on two popular cross-modal data sets. |
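The abstract names three ingredients: relative (triplet) supervision across modalities, an inter-modal/intra-modal triplet loss on the hash codes, and a graph-regularization term that keeps semantically similar codes close in Hamming space. The following is a minimal sketch in PyTorch of how such an objective can be assembled; it is not the authors' released code, and the margin, trade-off weight, code length, and toy similarity matrix are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def triplet_hash_loss(anchor, positive, negative, margin=8.0):
    """Inter-modal triplet margin loss on relaxed hash codes in [-1, 1]:
    the anchor (e.g. an image code) should be closer to the positive
    (matching text code) than to the negative (non-matching text code)
    by at least `margin` in squared L2 distance, which tracks Hamming
    distance for codes near {-1, +1}."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()


def graph_regularizer(codes, similarity):
    """Graph regularization: for relaxed codes B (n x k) and a semantic
    similarity matrix S (n x n, S_ij = 1 if items i and j share a label),
    penalize tr(B^T L B) with graph Laplacian L = D - S, so semantically
    similar items are pushed toward nearby codes."""
    degree = torch.diag(similarity.sum(dim=1))
    laplacian = degree - similarity
    return torch.trace(codes.t() @ laplacian @ codes) / codes.shape[0]


if __name__ == "__main__":
    n, k = 16, 32                                  # batch size, code length (assumed)
    img_codes = torch.tanh(torch.randn(n, k, requires_grad=True))  # stand-in for image-network outputs
    txt_pos = torch.tanh(torch.randn(n, k))        # matching text codes
    txt_neg = torch.tanh(torch.randn(n, k))        # non-matching text codes
    sim = (torch.rand(n, n) > 0.5).float()         # toy semantic similarity matrix

    loss = triplet_hash_loss(img_codes, txt_pos, txt_neg) + 0.1 * graph_regularizer(img_codes, sim)
    loss.backward()                                # gradients reach the relaxed image codes
    print(float(loss))
```

In the full method the relaxed codes would be produced by separate image and text networks trained jointly, with binary codes obtained via sign() at retrieval time.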
CODEN | IIPRE4 |
DOI | 10.1109/TIP.2018.2821921 |
Discipline | Applied Sciences; Engineering
EISSN | 1941-0042 |
EndPage | 3903 |
Genre | orig-research; Journal Article
GrantInformation | National Natural Science Foundation of China (61572388, 61703327); Key Research and Development Program-The Key Industry Innovation Chain of Shaanxi (2017ZDCXL-GY-05-04-02); Australian Research Council Projects (FL-170100117, DP-180103424, LP-150100671)
ISSN | 1057-7149 (print); 1941-0042 (electronic)
IsPeerReviewed | true |
IsScholarly | true |
Issue | 8 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html |
ORCID | Cheng Deng: 0000-0003-2620-3247; Xianglong Liu: 0000-0001-8425-4195; Xinbo Gao: 0000-0003-1443-0776; Dacheng Tao: 0000-0001-7225-5449
PMID | 29993656 |
PageCount | 11 |
PublicationDate | 2018-08-01
PublicationPlace | United States |
PublicationTitle | IEEE transactions on image processing |
PublicationTitleAbbrev | TIP |
PublicationTitleAlternate | IEEE Trans Image Process |
PublicationYear | 2018 |
Publisher | IEEE |
StartPage | 3893 |
SubjectTerms | Correlation; cross-modal retrieval; Deep neural network; graph regularization; hashing; Indexes; Internet; Multimedia communication; Neural networks; Semantics; Training data; triplet labels
Title | Triplet-Based Deep Hashing Network for Cross-Modal Retrieval |
URI | https://ieeexplore.ieee.org/document/8331146 https://www.ncbi.nlm.nih.gov/pubmed/29993656 https://www.proquest.com/docview/2068342625 |
Volume | 27 |