Embedding Adversarial Learning for Vehicle Re-Identification
Published in | IEEE Transactions on Image Processing, Vol. 28, No. 8, pp. 3794–3807 |
---|---|
Main Authors | Yihang Lou, Yan Bai, Jun Liu, Shiqi Wang, Ling-Yu Duan |
Format | Journal Article |
Language | English |
Published | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.08.2019 |
Abstract | The high similarities of different real-world vehicles and great diversities of the acquisition views pose grand challenges to vehicle re-identification (ReID), which traditionally maps the vehicle images into a high-dimensional embedding space for distance optimization, vehicle discrimination, and identification. To improve the discriminative capability and robustness of the ReID algorithm, we propose a novel end-to-end embedding adversarial learning network (EALN) that is capable of generating samples localized in the embedding space. Instead of selecting abundant hard negatives from the training set, which is extremely difficult if not impossible, with our embedding adversarial learning scheme, the automatically generated hard negative samples in the specified embedding space can greatly improve the capability of the network for discriminating similar vehicles. Moreover, the more challenging cross-view vehicle ReID problem, which requires the ReID algorithm to be robust with different query views, can also benefit from such a scheme based on the artificially generated cross-view samples. We demonstrate the promise of EALN through extensive experiments and show the effectiveness of hard negative and cross-view generation in facilitating vehicle ReID based on the comparisons with the state-of-the-art schemes. |
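The abstract describes generating hard negative samples directly in the embedding space instead of mining them from the training set. As an illustrative sketch only — not the paper's EALN architecture — the toy NumPy example below fakes a "generated" hard negative by interpolating a real negative embedding toward the anchor, and shows that it yields a larger (more informative) triplet loss than a randomly drawn negative. All function names, the 128-dimensional embeddings, and the interpolation coefficient are hypothetical choices for illustration.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Standard triplet loss on L2 distances in the embedding space."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def synthesize_hard_negative(anchor, negative, alpha=0.95):
    """Move a real negative toward the anchor, mimicking a generated
    hard negative localized near the anchor in the embedding space.
    (Stand-in for the paper's GAN-based generation, not its method.)"""
    return alpha * anchor + (1.0 - alpha) * negative

rng = np.random.default_rng(0)
anchor   = rng.normal(size=128)                   # embedding of query vehicle
positive = anchor + 0.1 * rng.normal(size=128)    # same vehicle, small perturbation
negative = rng.normal(size=128)                   # different vehicle, likely far away

easy = triplet_loss(anchor, positive, negative)
hard = triplet_loss(anchor, positive, synthesize_hard_negative(anchor, negative))

# A random negative is so far away that the margin is already satisfied
# (zero loss, no gradient); the synthetic near-anchor negative still
# violates the margin and keeps driving the embedding apart.
print(easy <= hard)  # → True
```

The design point mirrored here is that randomly sampled negatives quickly become uninformative once the margin is met, whereas negatives placed close to the anchor in the embedding space keep producing a useful training signal.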
Author | Yihang Lou (yihanglou@pku.edu.cn), National Engineering Laboratory for Video Technology, Peking University, Beijing, China; Yan Bai (yanbai@pku.edu.cn), National Engineering Laboratory for Video Technology, Peking University, Beijing, China; Jun Liu (jliu029@ntu.edu.sg, ORCID 0000-0002-4365-4165), School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore; Shiqi Wang (shiqwang@cityu.edu.hk, ORCID 0000-0002-3583-959X), Department of Computer Science, City University of Hong Kong, Hong Kong; Ling-Yu Duan (lingyu@pku.edu.cn, ORCID 0000-0002-4491-2023), National Engineering Laboratory for Video Technology, Peking University, Beijing, China |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/30835224 (View this record in MEDLINE/PubMed) |
CODEN | IIPRE4 |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2019 |
DOI | 10.1109/TIP.2019.2902112 |
Discipline | Applied Sciences Engineering |
EISSN | 1941-0042 |
EndPage | 3807 |
ExternalDocumentID | 30835224 10_1109_TIP_2019_2902112 8653852 |
Genre | orig-research Journal Article |
GrantInformation | NRF-NSFC (NRF2016NRF-NSFC001-098); National Basic Research Program of China (973 Program) / National Key Research and Development Program of China (2016YFB1001501); National Research Foundation Singapore; National Natural Science Foundation of China (U1611461, 61661146005); Shenzhen Municipal Science and Technology Program (JCYJ20170818141146428) |
ISSN | 1057-7149 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 8 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0002-4491-2023 0000-0002-4365-4165 0000-0002-3583-959X |
PMID | 30835224 |
PQID | 2239681387 |
PQPubID | 85429 |
PageCount | 14 |
PublicationDate | 2019-08-01 |
PublicationPlace | United States |
PublicationTitle | IEEE transactions on image processing |
PublicationTitleAbbrev | TIP |
PublicationTitleAlternate | IEEE Trans Image Process |
PublicationYear | 2019 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 3794 |
SubjectTerms | Algorithms; cross-view; Embedding; embedding adversarial learning; Gallium nitride; generative adversarial network; Generative adversarial networks; hard negatives; Image generation; Licenses; Machine learning; Optimization; Space vehicles; Task analysis; Training; Vehicle Re-Identification; Vehicles |
Title | Embedding Adversarial Learning for Vehicle Re-Identification |
URI | https://ieeexplore.ieee.org/document/8653852 https://www.ncbi.nlm.nih.gov/pubmed/30835224 https://www.proquest.com/docview/2239681387 https://www.proquest.com/docview/2188210755 |
Volume | 28 |