Video Person Re-Identification Using Attribute-Enhanced Features
Published in | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 32, No. 11, pp. 7951–7966
Main Authors | Tianrui Chai, Zhiyuan Chen, Annan Li, Jiaxin Chen, Xinyu Mei, Yunhong Wang
Format | Journal Article
Language | English
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.11.2022
Subjects | Annotations; attribute salient region enhance; Feature extraction; Footwear; Hair; Image color analysis; Image enhancement; Measurement; pedestrian attribute; Video-based person Re-ID; viewpoint and action-invariant triplet loss; Visualization
ISSN | 1051-8215 (print); 1558-2205 (electronic)
DOI | 10.1109/TCSVT.2022.3189027
Abstract | In this work, we propose to boost video-based person re-identification (Re-ID) by using attribute-enhanced feature representation. To this end, we not only use the ID-relevant attributes more effectively, but also, for the first time in the literature, harness the ID-irrelevant attributes to help model training. The former mainly include gender, age, clothing characteristics, etc., which contain rich and supplementary information about the pedestrian; the latter include viewpoint, action, etc., which have seldom been used for identification before. In particular, we use the attributes to enhance the significant areas of the image with a novel Attribute Salient Region Enhance (ASRE) module that can attend more accurately to the body of the pedestrian, so as to better separate the target from the background. Furthermore, we find that many ID-irrelevant but subject-relevant factors, such as the view angle and movement of the target pedestrian, have a great impact on the two-dimensional appearance of a pedestrian. We therefore propose to exploit both the ID-relevant and the ID-irrelevant attributes via a novel triplet loss called the Viewpoint and Action-Invariant (VAI) triplet loss. Based on the above, we design an Attribute Salience Assisted Network (ASA-Net) to perform attribute recognition along with identity recognition, and use the attributes for feature enhancement and hard sample mining. Extensive experiments on the MARS and DukeMTMC-VideoReID datasets show that our method outperforms the state of the art, and visualizations of the learned results further demonstrate the effectiveness of the proposed method.
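The abstract describes the Viewpoint and Action-Invariant (VAI) triplet loss only at a high level, so the exact formulation is not available from this record. As an illustration only, the minimal PyTorch-style sketch below shows one way a triplet loss could mine hard samples from identity labels together with viewpoint and action attributes; the function name, tensor shapes, mining rule, and margin value are assumptions, not the authors' formulation.

```python
# Hypothetical sketch of a viewpoint/action-aware triplet loss in the spirit of the
# VAI loss summarized in the abstract. All names and the mining rule are illustrative
# assumptions; the published paper defines the actual loss.
import torch
import torch.nn.functional as F

def vai_triplet_loss(feats, pids, view_ids, action_ids, margin=0.3):
    """feats: (N, D) clip features; pids: (N,) identity labels;
    view_ids / action_ids: (N,) ID-irrelevant attribute labels (assumed available)."""
    f = F.normalize(feats, dim=1)
    dist = torch.cdist(f, f)  # pairwise Euclidean distances, shape (N, N)

    same_id = pids.unsqueeze(0) == pids.unsqueeze(1)
    same_view = view_ids.unsqueeze(0) == view_ids.unsqueeze(1)
    same_action = action_ids.unsqueeze(0) == action_ids.unsqueeze(1)
    eye = torch.eye(len(pids), dtype=torch.bool, device=feats.device)

    # Hard positives: same identity but different viewpoint/action -> pull together.
    pos_mask = same_id & ~(same_view & same_action) & ~eye
    # Hard negatives: different identity but same viewpoint/action -> push apart.
    neg_mask = ~same_id & same_view & same_action

    losses = []
    for i in range(len(pids)):
        if pos_mask[i].any() and neg_mask[i].any():
            hardest_pos = dist[i][pos_mask[i]].max()
            hardest_neg = dist[i][neg_mask[i]].min()
            losses.append(F.relu(hardest_pos - hardest_neg + margin))
    if not losses:
        return feats.new_zeros(())
    return torch.stack(losses).mean()
```

Under this assumed mining rule, positives seen from a different viewpoint or action are pulled together while negatives sharing the same viewpoint and action are pushed apart, which is consistent with the invariance goal stated in the abstract.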
Authors |
– Tianrui Chai (trchai@buaa.edu.cn; ORCID: 0000-0003-2557-9496), ByteDance Inc., Shanghai, China
– Zhiyuan Chen (dechen@buaa.edu.cn), Alibaba Group, Hangzhou, China
– Annan Li (liannan@buaa.edu.cn; ORCID: 0000-0003-3497-5052), State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, Beijing, China
– Jiaxin Chen (jiaxinchen@buaa.edu.cn), State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, Beijing, China
– Xinyu Mei (xymei@buaa.edu.cn), China Southern Power Grid, Guangzhou, China
– Yunhong Wang (yhwang@buaa.edu.cn; ORCID: 0000-0001-8001-2703), State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, Beijing, China
CODEN | ITCTEM |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
Discipline | Engineering |
Funding | Key Program of National Natural Science Foundation of China (Grant U20B2069)
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
URI | https://ieeexplore.ieee.org/document/9817378 https://www.proquest.com/docview/2729639045 |