Selective Convolutional Descriptor Aggregation for Fine-Grained Image Retrieval
Published in: IEEE Transactions on Image Processing, Vol. 26, No. 6, pp. 2868–2881
Main Authors: Xiu-Shen Wei, Jian-Hao Luo, Jianxin Wu, Zhi-Hua Zhou
Format: Journal Article
Language: English
Published: IEEE, United States, 1 June 2017
Subjects: Automobiles; Birds; Buildings; Convolution; Dogs; Fine-grained image retrieval; Image retrieval; Machine learning; Selection and aggregation; Unsupervised object localization
Abstract: Deep convolutional neural network models pre-trained for the ImageNet classification task have been successfully adopted for tasks in other domains, such as texture description and object proposal generation, but these tasks require annotations for images in the new domain. In this paper, we focus on a novel and challenging task in a purely unsupervised setting: fine-grained image retrieval. Even with image labels, fine-grained images are difficult to classify, let alone retrieve without supervision. We propose the selective convolutional descriptor aggregation (SCDA) method. SCDA first localizes the main object in a fine-grained image, a step that discards the noisy background and keeps the useful deep descriptors. The selected descriptors are then aggregated and reduced to a short feature vector using the best practices we found. SCDA is unsupervised, using neither image labels nor bounding-box annotations. Experiments on six fine-grained data sets confirm the effectiveness of SCDA for fine-grained image retrieval. In addition, visualization of the SCDA features shows that they correspond to visual attributes (even subtle ones), which may explain SCDA's high mean average precision in fine-grained retrieval. Moreover, on general image retrieval data sets, SCDA achieves results comparable to state-of-the-art general image retrieval approaches.
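The abstract describes a two-step pipeline: select the deep descriptors that fall on the main object, then aggregate the survivors into a short vector. A minimal NumPy sketch of that idea follows; the function name scda_feature, the mean-value threshold, and the average-plus-max pooling are illustrative assumptions, and the paper's full method adds refinements (such as keeping only the largest connected region of the mask and reducing the dimensionality of the final vector) that are omitted here.

```python
import numpy as np

def scda_feature(conv_maps: np.ndarray) -> np.ndarray:
    """Sketch of SCDA-style descriptor selection and aggregation.

    conv_maps: (H, W, D) activations from the last convolutional or
    pooling layer of a CNN pre-trained on ImageNet. Returns an
    L2-normalised feature vector of length 2*D.
    """
    # 1. Aggregation map: sum activations over the channel axis.
    #    High values tend to coincide with the main object.
    agg = conv_maps.sum(axis=2)                      # (H, W)

    # 2. Mask: keep spatial positions whose summed response exceeds
    #    the mean (threshold choice is an assumption in this sketch).
    mask = agg > agg.mean()                          # (H, W) boolean

    # 3. Select descriptors at masked positions, then aggregate them
    #    with average- and max-pooling and concatenate the results.
    selected = conv_maps[mask]                       # (N, D)
    avg_pool = selected.mean(axis=0)
    max_pool = selected.max(axis=0)
    feat = np.concatenate([avg_pool, max_pool])      # (2*D,)

    # 4. L2-normalise so retrieval can rank by cosine similarity.
    return feat / (np.linalg.norm(feat) + 1e-12)
```

Retrieval then reduces to nearest-neighbour search: extract scda_feature for the query and every gallery image, and rank the gallery by cosine similarity (a dot product of the normalised vectors).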
CODEN: IIPRE4
DOI: 10.1109/TIP.2017.2688133
Discipline: Applied Sciences; Engineering
EISSN: 1941-0042
Funding: National Natural Science Foundation of China (Grants 61422203 and 61333014; funder ID 10.13039/501100001809)
ISSN: 1057-7149
Peer reviewed: Yes
License: https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
ORCID: 0000-0002-2085-7568 (Jianxin Wu)
PMID: 28368819
Page count: 14
Abbreviated title: IEEE Trans. Image Process. (TIP)
URI: https://ieeexplore.ieee.org/document/7887720
https://www.ncbi.nlm.nih.gov/pubmed/28368819
https://www.proquest.com/docview/1884167489