Graph Sampling-Based Multi-Stream Enhancement Network for Visible-Infrared Person Re-Identification
Published in | Sensors (Basel, Switzerland) Vol. 23; no. 18; p. 7948 |
---|---|
Main Authors | Jiang, Jinhua; Xiao, Junjie; Wang, Renlin; Li, Tiansong; Zhang, Wenfeng; Ran, Ruisheng; Xiang, Sen |
Format | Journal Article |
Language | English |
Published | Basel: MDPI AG, 01.09.2023 |
Subjects | Adaptability; Contour Expansion Module; Cross-modality Graph Sampler; Methods; modality discrepancy; Multi-Modal Data; Neural networks; Semantics; Surveillance; VI Re-ID |
Online Access | Get full text |
Abstract | With the increasing demand for person re-identification (Re-ID) tasks, the need for all-day retrieval has become an inevitable trend. Nevertheless, single-modal Re-ID is no longer sufficient to meet this requirement, making Multi-Modal Data crucial in Re-ID. Consequently, a Visible-Infrared Person Re-Identification (VI Re-ID) task is proposed, which aims to match pairs of person images from the visible and infrared modalities. The significant modality discrepancy between the modalities poses a major challenge. Existing VI Re-ID methods focus on cross-modal feature learning and modal transformation to alleviate the discrepancy but overlook the impact of person contour information. Contours exhibit modality invariance, which is vital for learning effective identity representations and cross-modal matching. In addition, due to the low intra-modal diversity in the visible modality, it is difficult to distinguish the boundaries between some hard samples. To address these issues, we propose the Graph Sampling-based Multi-stream Enhancement Network (GSMEN). Firstly, the Contour Expansion Module (CEM) incorporates the contour information of a person into the original samples, further reducing the modality discrepancy and leading to improved matching stability between image pairs of different modalities. Additionally, to better distinguish cross-modal hard sample pairs during the training process, an innovative Cross-modality Graph Sampler (CGS) is designed for sample selection before training. The CGS calculates the feature distance between samples from different modalities and groups similar samples into the same batch during the training process, effectively exploring the boundary relationships between hard classes in the cross-modal setting. Experiments conducted on the SYSU-MM01 and RegDB datasets demonstrate the superiority of our proposed method. Specifically, in the VIS→IR task, the experimental results on the RegDB dataset achieve 93.69% for Rank-1 and 92.56% for mAP. |
---|---|
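The Contour Expansion Module described in the abstract injects modality-invariant contour information back into the original sample. A minimal illustrative sketch follows; note this is a reconstruction under stated assumptions, not the authors' implementation: a simple gradient-magnitude map stands in for a proper contour detector (the paper's reference list includes Canny's edge detector), and the blending scheme and function name are invented.

```python
import numpy as np

def contour_expansion(image, alpha=0.5):
    """Illustrative stand-in for a Contour Expansion Module (CEM):
    extract an edge map and blend it back into the original sample
    as an extra, modality-invariant cue. `alpha` is a hypothetical
    mixing weight; the actual CEM design may differ."""
    gray = image.mean(axis=2)             # H x W, collapse colour channels
    gy, gx = np.gradient(gray)            # finite-difference gradients
    edges = np.hypot(gx, gy)              # gradient magnitude as a crude contour map
    edges = edges / (edges.max() + 1e-8)  # normalise to [0, 1]
    # Blend the contour map into every channel of the original sample.
    return (1 - alpha) * image + alpha * edges[..., None]
```

Because the contour map depends on shape rather than colour or thermal response, the blended sample carries a cue that looks similar in both visible and infrared imagery.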
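The batch-construction rule attributed to the Cross-modality Graph Sampler in the abstract (compute feature distances across modalities, then group similar classes into the same batch) can be sketched as below. This is a hedged illustration: the function name, the class-centroid distance, and the greedy nearest-neighbour grouping are assumptions for exposition, not the authors' published algorithm.

```python
import numpy as np

def graph_sample_batches(vis_feats, ir_feats, batch_classes=4):
    """Hedged sketch of a cross-modality graph sampler.
    `vis_feats` / `ir_feats`: dict mapping class_id -> (n_i, d) feature
    arrays for the visible and infrared modalities (names illustrative).
    Classes whose cross-modal centroids are mutually close are packed
    into one batch, so hard (similar) classes are trained together."""
    classes = sorted(vis_feats)
    v = np.stack([vis_feats[c].mean(axis=0) for c in classes])  # visible centroids
    r = np.stack([ir_feats[c].mean(axis=0) for c in classes])   # infrared centroids
    # Pairwise Euclidean distances between cross-modal class centroids.
    dist = np.linalg.norm(v[:, None, :] - r[None, :, :], axis=2)
    batches, used = [], set()
    for i, c in enumerate(classes):
        if c in used:
            continue
        # Seed a batch with this class, then add its nearest unused neighbours.
        batch = [c]
        used.add(c)
        for j in np.argsort(dist[i]):
            if len(batch) == batch_classes:
                break
            cj = classes[j]
            if cj not in used:
                batch.append(cj)
                used.add(cj)
        batches.append(batch)
    return batches
```

Grouping nearest cross-modal classes into one batch means a triplet or identity loss sees the hardest negatives each step, which is the boundary-exploration effect the abstract attributes to the CGS.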
Audience | Academic |
Author | Xiang, Sen Xiao, Junjie Ran, Ruisheng Wang, Renlin Li, Tiansong Zhang, Wenfeng Jiang, Jinhua |
AuthorAffiliation | 1 College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China; 2021210516040@stu.cqnu.edu.cn (J.J.); 2022210516103@stu.cqnu.edu.cn (J.X.); tiansongli@cqnu.edu.cn (T.L.) 3 School of Information Science and Engineering, Wuhan University of Science and Technology, Wuhan 430081, China; xiangsen@wust.edu.cn 2 School of Computer Engineering, Weifang University, Weifang 261061, China; wfuwrl@126.com |
Author_xml | 1. Jiang, Jinhua (ORCID 0009-0006-4215-5176); 2. Xiao, Junjie (ORCID 0009-0003-2229-103X); 3. Wang, Renlin; 4. Li, Tiansong; 5. Zhang, Wenfeng; 6. Ran, Ruisheng (ORCID 0000-0002-0785-2703); 7. Xiang, Sen |
Cites_doi | 10.1109/CVPR42600.2020.00321 10.1609/aaai.v34i04.5891 10.1109/CVPR42600.2020.01339 10.1109/LSP.2020.2994815 10.1109/CVPR.2015.7298794 10.1109/ICCV48922.2021.01161 10.1109/ICCV48922.2021.01438 10.3390/s22166293 10.24963/ijcai.2018/94 10.1145/3503161.3548336 10.1109/LSP.2021.3115040 10.3390/s17030605 10.1609/aaai.v36i1.19987 10.1007/s11263-019-01290-1 10.1609/aaai.v34i07.6894 10.1109/TNNLS.2021.3105702 10.1109/JSTSP.2022.3233716 10.1007/978-3-031-19781-9_28 10.1109/CVPR.2017.389 10.1109/TPAMI.1986.4767851 10.1109/CVPR46437.2021.00431 10.1109/CVPR46437.2021.00343 10.1609/aaai.v35i4.16466 10.1109/TMM.2020.3042080 10.1109/ICCV48922.2021.00029 10.1109/CVPR.2019.00029 10.1016/j.inffus.2022.09.019 10.1109/TPAMI.2021.3054775 10.1109/ICCV.2017.575 10.3390/s23031426 10.1109/TIFS.2020.3001665 10.1109/ICCV48922.2021.01331 10.1109/CVPR46437.2021.00621 10.1109/ICCV.2019.00372 10.1109/TNNLS.2021.3085978 10.1016/j.inffus.2016.03.003 10.3390/s21175839 10.1109/CVPR52729.2023.00214 10.1109/ICCV.2015.133 10.1109/ICCV48922.2021.01183 10.1109/TPAMI.2020.3048039 10.1007/978-3-030-58520-4_14 10.1109/CVPR.2016.90 10.1109/LSP.2021.3091924 10.1609/aaai.v37i2.25273 10.1145/3474085.3475250 |
ContentType | Journal Article |
Copyright | COPYRIGHT 2023 MDPI AG 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. 2023 by the authors. 2023 |
DBID | AAYXX CITATION 3V. 7X7 7XB 88E 8FI 8FJ 8FK ABUWG AFKRA AZQEC BENPR CCPQU DWQXO FYUFA GHDGH K9. M0S M1P PHGZM PHGZT PIMPY PJZUB PKEHL PPXIY PQEST PQQKQ PQUKI PRINS 7X8 5PM DOA |
DOI | 10.3390/s23187948 |
DatabaseName | CrossRef ProQuest Central (Corporate) Health & Medical Collection ProQuest Central (purchase pre-March 2016) Medical Database (Alumni Edition) Hospital Premium Collection Hospital Premium Collection (Alumni Edition) ProQuest Central (Alumni) (purchase pre-March 2016) ProQuest Central (Alumni) ProQuest Central UK/Ireland ProQuest Central Essentials ProQuest Central ProQuest One Community College ProQuest Central Korea Health Research Premium Collection Health Research Premium Collection (Alumni) ProQuest Health & Medical Complete (Alumni) Health & Medical Collection (Alumni) Medical Database ProQuest Central Premium ProQuest One Academic Publicly Available Content Database (subscription) ProQuest Health & Medical Research Collection ProQuest One Academic Middle East (New) ProQuest One Health & Nursing ProQuest One Academic Eastern Edition (DO NOT USE) ProQuest One Academic ProQuest One Academic UKI Edition ProQuest Central China MEDLINE - Academic PubMed Central (Full Participant titles) DOAJ Directory of Open Access Journals |
DatabaseTitle | CrossRef Publicly Available Content Database ProQuest One Academic Middle East (New) ProQuest Central Essentials ProQuest Health & Medical Complete (Alumni) ProQuest Central (Alumni Edition) ProQuest One Community College ProQuest One Health & Nursing ProQuest Central China ProQuest Central Health Research Premium Collection Health and Medicine Complete (Alumni Edition) ProQuest Central Korea Health & Medical Research Collection ProQuest Central (New) ProQuest Medical Library (Alumni) ProQuest One Academic Eastern Edition ProQuest Hospital Collection Health Research Premium Collection (Alumni) ProQuest Hospital Collection (Alumni) ProQuest Health & Medical Complete ProQuest Medical Library ProQuest One Academic UKI Edition ProQuest One Academic ProQuest One Academic (New) ProQuest Central (Alumni) MEDLINE - Academic |
DatabaseTitleList | CrossRef MEDLINE - Academic Publicly Available Content Database |
Database_xml | – sequence: 1 dbid: DOA name: DOAJ Directory of Open Access Journals url: https://www.doaj.org/ sourceTypes: Open Website – sequence: 2 dbid: BENPR name: AUTh Library subscriptions: ProQuest Central url: https://www.proquest.com/central sourceTypes: Aggregation Database |
DeliveryMethod | fulltext_linktorsrc |
Discipline | Engineering |
EISSN | 1424-8220 |
ExternalDocumentID | oai_doaj_org_article_9796b13e467a429f8bb4c1faf781978c PMC10534846 A771812545 10_3390_s23187948 |
GrantInformation_xml | – fundername: Natural Science Foundation of Chongqing grantid: 2023NSCQ-MSX1645 – fundername: Key Project for Science and Technology Research Program of Chongqing Municipal Education Commission grantid: KJZD-K202100505 – fundername: Chongqing Normal University Foundation grantid: 21XLB026 – fundername: Science and Technology Research Program of Chongqing Municipal Education Commission grantid: KJQN202200551 – fundername: Chongqing Technology Innovation and Application Development Project grantid: cstc2020jscx-msxmX0190 |
IEDL.DBID | M48 |
ISSN | 1424-8220 |
IngestDate | Wed Aug 27 01:32:33 EDT 2025 Thu Aug 21 18:36:18 EDT 2025 Tue Aug 05 09:12:24 EDT 2025 Fri Jul 25 07:06:05 EDT 2025 Tue Jun 10 21:17:46 EDT 2025 Tue Jul 01 03:50:29 EDT 2025 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 18 |
Language | English |
License | https://creativecommons.org/licenses/by/4.0 Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
LinkModel | DirectLink |
Notes | These authors contributed equally to this work. |
ORCID | 0009-0006-4215-5176 0009-0003-2229-103X 0000-0002-0785-2703 |
OpenAccessLink | http://journals.scholarsportal.info/openUrl.xqy?doi=10.3390/s23187948 |
PMID | 37766005 |
PQID | 2869630057 |
PQPubID | 2032333 |
ParticipantIDs | doaj_primary_oai_doaj_org_article_9796b13e467a429f8bb4c1faf781978c pubmedcentral_primary_oai_pubmedcentral_nih_gov_10534846 proquest_miscellaneous_2870144592 proquest_journals_2869630057 gale_infotracacademiconefile_A771812545 crossref_primary_10_3390_s23187948 |
PublicationCentury | 2000 |
PublicationDate | 2023-09-01 |
PublicationDateYYYYMMDD | 2023-09-01 |
PublicationDecade | 2020 |
PublicationPlace | Basel |
PublicationTitle | Sensors (Basel, Switzerland) |
PublicationYear | 2023 |
Publisher | MDPI AG MDPI |
References | ref_13 ref_12 ref_11 ref_10 ref_52 ref_19 ref_17 ref_16 ref_15 Goodfellow (ref_14) 2014; 27 Liu (ref_39) 2020; 23 ref_25 Shu (ref_5) 2021; 28 ref_24 Canny (ref_53) 1986; 8 ref_23 ref_22 ref_21 Karim (ref_32) 2023; 90 Ghassemian (ref_31) 2016; 32 ref_28 ref_27 ref_26 Liu (ref_51) 2023; 17 Zhang (ref_7) 2020; 27 Wu (ref_20) 2020; 128 ref_36 ref_35 ref_34 ref_33 Liu (ref_50) 2023; 34 ref_30 Kong (ref_6) 2021; 28 ref_38 ref_37 Ye (ref_4) 2021; 44 Ye (ref_18) 2021; 16 ref_47 ref_46 ref_45 ref_44 ref_43 ref_42 ref_41 Li (ref_29) 2020; 44 ref_40 ref_1 ref_3 ref_2 ref_49 ref_48 ref_9 ref_8 |
References_xml | – ident: ref_35 doi: 10.1109/CVPR42600.2020.00321 – ident: ref_17 doi: 10.1609/aaai.v34i04.5891 – ident: ref_45 doi: 10.1109/CVPR42600.2020.01339 – volume: 27 start-page: 850 year: 2020 ident: ref_7 article-title: AsNet: Asymmetrical network for learning rich features in person re-identification publication-title: IEEE Signal Process Lett. doi: 10.1109/LSP.2020.2994815 – ident: ref_42 doi: 10.1109/CVPR.2015.7298794 – ident: ref_19 doi: 10.1109/ICCV48922.2021.01161 – ident: ref_33 doi: 10.1109/ICCV48922.2021.01438 – ident: ref_2 doi: 10.3390/s22166293 – ident: ref_13 doi: 10.24963/ijcai.2018/94 – ident: ref_24 doi: 10.1145/3503161.3548336 – volume: 28 start-page: 2003 year: 2021 ident: ref_6 article-title: Dynamic center aggregation loss with mixed modality for visible-infrared person re-identification publication-title: IEEE Signal Process Lett. doi: 10.1109/LSP.2021.3115040 – ident: ref_41 doi: 10.3390/s17030605 – ident: ref_48 doi: 10.1609/aaai.v36i1.19987 – volume: 128 start-page: 1765 year: 2020 ident: ref_20 article-title: RGB-IR person re-identification by cross-modality similarity preservation publication-title: Int. J. Comput. Vis. doi: 10.1007/s11263-019-01290-1 – ident: ref_23 doi: 10.1609/aaai.v34i07.6894 – volume: 34 start-page: 1958 year: 2023 ident: ref_50 article-title: SFANet: A Spectrum-Aware Feature Augmentation Network for Visible-Infrared Person ReIdentification publication-title: IEEE Trans. Neural Netw. Learn. Sys. doi: 10.1109/TNNLS.2021.3105702 – volume: 17 start-page: 545 year: 2023 ident: ref_51 article-title: Towards homogeneous modality learning and multi-granularity information exploration for visible-infrared person re-identification publication-title: IEEE J. Sel. Top. 
Signal Process doi: 10.1109/JSTSP.2022.3233716 – ident: ref_16 doi: 10.1007/978-3-031-19781-9_28 – ident: ref_52 doi: 10.1109/CVPR.2017.389 – volume: 8 start-page: 679 year: 1986 ident: ref_53 article-title: A computational approach to edge detection publication-title: IEEE Trans. Pattern Anal. Mach. Intell. doi: 10.1109/TPAMI.1986.4767851 – ident: ref_12 doi: 10.1109/CVPR46437.2021.00431 – ident: ref_38 doi: 10.1109/CVPR46437.2021.00343 – ident: ref_10 doi: 10.1609/aaai.v35i4.16466 – volume: 23 start-page: 4414 year: 2020 ident: ref_39 article-title: Parameter sharing exploration and hetero-center triplet loss for visible-thermal person re-identification publication-title: IEEE Trans. Multimed. doi: 10.1109/TMM.2020.3042080 – ident: ref_26 doi: 10.1109/ICCV48922.2021.00029 – ident: ref_30 doi: 10.1109/CVPR.2019.00029 – volume: 90 start-page: 185 year: 2023 ident: ref_32 article-title: Current advances and future perspectives of image fusion: A comprehensive review publication-title: Inf. Fusion doi: 10.1016/j.inffus.2022.09.019 – volume: 44 start-page: 2872 year: 2021 ident: ref_4 article-title: Deep learning for person re-identification: A survey and outlook publication-title: IEEE Trans. Pattern Anal. Mach. Intell. doi: 10.1109/TPAMI.2021.3054775 – ident: ref_28 – ident: ref_8 doi: 10.1109/ICCV.2017.575 – ident: ref_3 doi: 10.3390/s23031426 – volume: 27 start-page: 53 year: 2014 ident: ref_14 article-title: Generative adversarial nets publication-title: Adv. Neural Inf. Process Syst. – volume: 16 start-page: 728 year: 2021 ident: ref_18 article-title: Visible-infrared person re-identification via homogeneous augmented tri-modal learning publication-title: IEEE Trans. Inf. Foren. Sec. 
doi: 10.1109/TIFS.2020.3001665 – ident: ref_34 – ident: ref_9 doi: 10.1109/ICCV48922.2021.01331 – ident: ref_37 doi: 10.1109/CVPR46437.2021.00621 – ident: ref_49 doi: 10.1109/ICCV.2019.00372 – ident: ref_40 – ident: ref_22 doi: 10.1109/TNNLS.2021.3085978 – volume: 32 start-page: 75 year: 2016 ident: ref_31 article-title: A review of remote sensing image fusion methods publication-title: Inf. Fusion doi: 10.1016/j.inffus.2016.03.003 – ident: ref_1 doi: 10.3390/s21175839 – ident: ref_44 – ident: ref_11 doi: 10.1109/CVPR52729.2023.00214 – ident: ref_43 doi: 10.1109/ICCV.2015.133 – ident: ref_46 doi: 10.1109/ICCV48922.2021.01183 – volume: 44 start-page: 3260 year: 2020 ident: ref_29 article-title: Self-correction for human parsing publication-title: IEEE Trans. Pattern Anal. Mach. Intell. doi: 10.1109/TPAMI.2020.3048039 – ident: ref_21 doi: 10.1007/978-3-030-58520-4_14 – ident: ref_15 – ident: ref_27 doi: 10.1109/CVPR.2016.90 – ident: ref_36 – volume: 28 start-page: 1365 year: 2021 ident: ref_5 article-title: Semantic-guided pixel sampling for cloth-changing person re-identification publication-title: IEEE Signal Process Lett. doi: 10.1109/LSP.2021.3091924 – ident: ref_25 doi: 10.1609/aaai.v37i2.25273 – ident: ref_47 doi: 10.1145/3474085.3475250 |
SourceID | doaj pubmedcentral proquest gale crossref |
SourceType | Open Website Open Access Repository Aggregation Database Index Database |
StartPage | 7948 |
SubjectTerms | Adaptability Contour Expansion Module Cross-modality Graph Sampler Methods modality discrepancy Multi-Modal Data Neural networks Semantics Surveillance VI Re-ID |
Title | Graph Sampling-Based Multi-Stream Enhancement Network for Visible-Infrared Person Re-Identification |
URI | https://www.proquest.com/docview/2869630057 https://www.proquest.com/docview/2870144592 https://pubmed.ncbi.nlm.nih.gov/PMC10534846 https://doaj.org/article/9796b13e467a429f8bb4c1faf781978c |
Volume | 23 |
hasFullText | 1 |
inHoldings | 1 |
linkProvider | Scholars Portal |