Disentangled Feature Learning Network and a Comprehensive Benchmark for Vehicle Re-Identification


Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 44, No. 10, pp. 6854-6871
Main Authors: Bai, Yan; Liu, Jun; Lou, Yihang; Wang, Ce; Duan, Ling-Yu
Format: Journal Article
Language: English
Published: New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.10.2022

Abstract: Vehicle Re-Identification (ReID) is of great significance for public security and intelligent transportation. Large and comprehensive datasets are crucial for the development of vehicle ReID in model training and evaluation. However, existing datasets in this field have limitations in many aspects, including constrained capture conditions, limited variation of vehicle appearances, and the small scale of the training and test sets. Hence, a new, large, and challenging benchmark for vehicle ReID is urgently needed. In this paper, we propose a large vehicle ReID dataset, called VERI-Wild 2.0, containing 825,042 images. It is captured using a city-scale surveillance camera system of 274 cameras covering a very large area of over 200 km². Specifically, the samples in our dataset present very rich appearance diversity thanks to the long collection time span, unconstrained capturing viewpoints, various illumination conditions, diversified background environments, and different weather conditions. Furthermore, to facilitate more practical benchmarking, we define a challenging and large test set containing about 400K vehicle images that has no camera overlap with the training set. VERI-Wild 2.0 is expected to facilitate the design, adaptation, development, and evaluation of different types of learning models for vehicle ReID. Besides, we also design a new method for vehicle ReID. We observe that orientation is a crucial factor for feature matching in vehicle ReID. To match vehicle pairs captured from similar orientations, the learned features are expected to capture the specific, detailed differential information that discriminates visually similar yet different vehicles. In contrast, when matching samples captured from different orientations, the features are desired to capture the orientation-invariant common information. Thus, a novel disentangled feature learning network (DFNet) is proposed. It explicitly considers the orientation information for vehicle ReID, and concurrently learns orientation-specific and orientation-common features, which can then be adaptively exploited via an adaptive matching scheme when dealing with matching pairs from similar or different orientations. Comprehensive experimental results show the effectiveness of our proposed method.
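
The adaptive matching scheme described in the abstract can be illustrated with a minimal sketch. This is not the authors' released DFNet implementation: the function names (adaptive_match, cosine_sim), the hard same/different-orientation switch, and the 256-d embedding size are all assumptions made for illustration.

import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def adaptive_match(query: dict, gallery: dict, same_orientation: bool) -> float:
    """Score a query/gallery pair from disentangled features.

    `query` and `gallery` each hold two disentangled embeddings
    (hypothetical schema):
      'specific' - orientation-specific features (fine local details),
      'common'   - orientation-invariant common features.

    Per the abstract: pairs seen from similar orientations are
    discriminated by orientation-specific details, while pairs seen
    from different orientations fall back to the common features.
    """
    if same_orientation:
        return cosine_sim(query["specific"], gallery["specific"])
    return cosine_sim(query["common"], gallery["common"])

# Toy usage with random embeddings (the dimension is an assumption).
rng = np.random.default_rng(0)
query = {k: rng.normal(size=256) for k in ("specific", "common")}
gallery = {k: rng.normal(size=256) for k in ("specific", "common")}
print(adaptive_match(query, gallery, same_orientation=True))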
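The camera-disjoint evaluation protocol mentioned in the abstract (a test set with no camera overlap with the training set) can be realized by partitioning image records on camera IDs. The sketch below is an assumption-laden illustration: the record schema ('camera_id', 'image'), the split fraction, and the helper name are hypothetical and not taken from the VERI-Wild 2.0 release.

import random

def camera_disjoint_split(records, test_fraction=0.3, seed=0):
    """Split image records so train and test share no camera.

    `records` is a list of dicts with at least a 'camera_id' key
    (a hypothetical schema; the actual dataset format may differ).
    """
    cameras = sorted({r["camera_id"] for r in records})
    rng = random.Random(seed)
    rng.shuffle(cameras)
    n_test = max(1, int(len(cameras) * test_fraction))
    test_cams = set(cameras[:n_test])
    train = [r for r in records if r["camera_id"] not in test_cams]
    test = [r for r in records if r["camera_id"] in test_cams]
    return train, test

# Toy usage: 10 images spread over 5 cameras.
recs = [{"image": f"img_{i}.jpg", "camera_id": i % 5} for i in range(10)]
train_set, test_set = camera_disjoint_split(recs)
assert not ({r["camera_id"] for r in train_set} & {r["camera_id"] for r in test_set})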
Author Details:
1. Bai, Yan (ORCID: 0000-0002-2152-9611). Email: yanbai@pku.edu.cn. Organization: Institute of Digital Media, Peking University, Beijing, China.
2. Liu, Jun (ORCID: 0000-0002-4365-4165). Email: jun_liu@sutd.edu.sg. Organization: Information Systems Technology and Design Pillar, Singapore University of Technology and Design, Singapore, Singapore.
3. Lou, Yihang (ORCID: 0000-0002-8143-389X). Email: louyihang1@huawei.com. Organization: GoTen AI Lab, Intelligent Vision Department, Huawei Technologies Co., Ltd., Beijing, China.
4. Wang, Ce (ORCID: 0000-0002-2448-4789). Email: wce@pku.edu.cn. Organization: Institute of Digital Media, Peking University, Beijing, China.
5. Duan, Ling-Yu (ORCID: 0000-0002-4491-2023). Email: lingyu@pku.edu.cn. Organization: Institute of Digital Media, Peking University, Beijing, China.
CODEN: ITPIDJ
ContentType: Journal Article
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2022
DOI: 10.1109/TPAMI.2021.3099253
Discipline: Engineering; Computer Science
EISSN: 2160-9292, 1939-3539
EndPage: 6871
Genre: orig-research
GrantInformation:
– Ng Teng Fong Charitable Foundation
– National Natural Science Foundation of China (grants 62088102 and U1611461; funder ID: 10.13039/501100001809)
– PKU-NTU Joint Research Institute
ISSN: 0162-8828, 1939-3539
IsPeerReviewed: true
IsScholarly: true
Issue: 10
License: https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html; https://doi.org/10.15223/policy-029; https://doi.org/10.15223/policy-037
PMID: 34310289
PageCount: 18
PublicationTitleAbbrev: TPAMI
StartPage: 6854
SubjectTerms: Benchmark testing; Benchmarks; Cameras; Datasets; disentangled learning; Feature extraction; Learning; Lighting; Matching; Meteorology; Orientation; Surveillance; Test sets; Training; vehicle dataset; Vehicle re-identification; Weather
URI: https://ieeexplore.ieee.org/document/9495181
https://www.proquest.com/docview/2714893665
https://www.proquest.com/docview/2555638995
Volume: 44