Deep Ranking for Person Re-Identification via Joint Representation Learning

Bibliographic Details
Published in: IEEE Transactions on Image Processing, Vol. 25, No. 5, pp. 2353-2367
Main Authors: Chen, Shi-Zhe; Guo, Chun-Chao; Lai, Jian-Huang
Format: Journal Article
Language: English
Published: United States: IEEE, 01.05.2016
Publisher: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Online Access: Get full text

Abstract: This paper proposes a novel approach to person re-identification, a fundamental task in distributed multi-camera surveillance systems. Although a variety of powerful algorithms have been presented in the past few years, most of them usually focus on designing hand-crafted features and learning metrics either individually or sequentially. Different from previous works, we formulate a unified deep ranking framework that jointly tackles both of these key components to maximize their strengths. We start from the principle that the correct match of the probe image should be positioned in the top rank within the whole gallery set. An effective learning-to-rank algorithm is proposed to minimize the cost corresponding to the ranking disorders of the gallery. The ranking model is solved with a deep convolutional neural network (CNN) that builds the relation between input image pairs and their similarity scores through joint representation learning directly from raw image pixels. The proposed framework allows us to get rid of feature engineering and does not rely on any assumption. An extensive comparative evaluation is given, demonstrating that our approach significantly outperforms all the state-of-the-art approaches, including both traditional and CNN-based methods on the challenging VIPeR, CUHK-01, and CAVIAR4REID datasets. In addition, our approach has better ability to generalize across datasets without fine-tuning.
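The abstract's two key ingredients, a CNN that scores a probe/gallery image pair jointly from raw pixels and a ranking cost that pushes the correct match to the top rank of the gallery, can be sketched in a few lines. The PyTorch fragment below is a minimal illustration under stated assumptions, not the paper's actual architecture: the channel-stacked pair input, the layer sizes, the softmax-style listwise cost, and the names JointScoreNet and ranking_loss are all invented for demonstration.

    # Hedged sketch of joint-representation pair scoring with a listwise
    # ranking cost. Architecture and loss are illustrative assumptions,
    # not the paper's exact design.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class JointScoreNet(nn.Module):
        """Scores a probe/gallery pair from raw pixels via a joint representation."""
        def __init__(self):
            super().__init__()
            # Stacking the pair channel-wise (6 input channels) lets every
            # convolution see both images at once -- one simple way to learn
            # a joint representation rather than two independent embeddings.
            self.features = nn.Sequential(
                nn.Conv2d(6, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.score = nn.Linear(64, 1)  # scalar similarity score

        def forward(self, probe, gallery):
            x = torch.cat([probe, gallery], dim=1)   # (B, 6, H, W)
            f = self.features(x).flatten(1)          # (B, 64)
            return self.score(f).squeeze(1)          # (B,)

    def ranking_loss(scores, pos_index):
        """Listwise cost: push the true match to the top rank of the gallery.

        scores: (G,) similarities of one probe against G gallery images.
        pos_index: index of the correct match within the gallery.
        """
        # Softmax cross-entropy over the gallery penalizes every gallery image
        # that outranks the true match -- a smooth surrogate for the paper's
        # "ranking disorder" cost.
        return F.cross_entropy(scores.unsqueeze(0), torch.tensor([pos_index]))

    # Usage: score one probe against a small gallery and take a gradient step.
    net = JointScoreNet()
    probe = torch.randn(1, 3, 128, 48)           # one probe image
    gallery = torch.randn(8, 3, 128, 48)         # gallery of 8 candidates
    scores = net(probe.expand(8, -1, -1, -1), gallery)
    loss = ranking_loss(scores, pos_index=2)     # suppose gallery[2] is the match
    loss.backward()

Channel stacking is just one plausible realization of joint representation learning; the abstract's central claim is that feature extraction and the ranking objective are optimized together end-to-end, rather than hand-crafted features being paired with a separately learned metric.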
CODEN: IIPRE4
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2016
DOI: 10.1109/TIP.2016.2545929
Discipline: Applied Sciences; Engineering
EISSN: 1941-0042
Genre: Original Research; Research Support, Non-U.S. Gov't; Journal Article
Funding: National Natural Science Foundation of China (Grant 61573387); Guangdong Program (Grant 2015B010105005); Guangzhou Program (Grant 201508010032)
ISSN: 1057-7149
Peer Reviewed: Yes
Keywords: person re-identification; deep convolutional neural network; learning to rank
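Results on re-ID benchmarks such as VIPeR, CUHK-01, and CAVIAR4REID are conventionally reported as Cumulative Match Characteristic (CMC) curves: for each probe, the gallery is sorted by similarity score, and rank-k accuracy is the fraction of probes whose true match lands in the top k. A minimal NumPy sketch of that metric follows; the random score matrix and the single-shot setup (one true match per probe) are assumptions for illustration.

    import numpy as np

    def cmc(scores, gt, max_rank=20):
        """CMC curve from a (num_probes, num_gallery) similarity matrix.

        scores[i, j]: similarity of probe i to gallery image j (higher is better).
        gt[i]: gallery index of probe i's true match (single-shot setting).
        Returns matching rates at ranks 1..max_rank.
        """
        order = np.argsort(-scores, axis=1)   # gallery sorted best-first per probe
        # 0-based rank at which the true match appears for each probe.
        match_rank = np.argmax(order == gt[:, None], axis=1)
        return np.array([(match_rank < k).mean() for k in range(1, max_rank + 1)])

    # Toy usage with random scores; real use would plug in the CNN's pair scores.
    rng = np.random.default_rng(0)
    scores = rng.standard_normal((316, 316))  # e.g. a VIPeR-sized test split
    gt = np.arange(316)                       # probe i matches gallery image i
    print("rank-1:", cmc(scores, gt)[0])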
License: https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
PMID: 27019494
Journal Abbreviation: IEEE Trans. Image Process.
Subject Terms: Algorithm design and analysis; Algorithms; Biometric Identification - methods; Cameras; Databases, Factual; deep convolutional neural network; Feature extraction; Galleries; Humans; Image color analysis; Learning; learning to rank; Machine Learning; Measurement; Neural networks; Neural Networks (Computer); Person re-identification; Probes; Ranking; Representations; Similarity
URI: https://ieeexplore.ieee.org/document/7439828
https://www.ncbi.nlm.nih.gov/pubmed/27019494
https://www.proquest.com/docview/1787110137
https://www.proquest.com/docview/1807277804
https://www.proquest.com/docview/1825569937