Beyond Augmentation: Empowering Model Robustness under Extreme Capture Environments
Main Authors | Gong, Yunpeng; Hou, Yongjie; Zhang, Chuangliang; Jiang, Min |
---|---|
Format | Journal Article |
Language | English |
Published | 18.07.2024 |
Subjects | Computer Science - Computer Vision and Pattern Recognition |
Online Access | https://arxiv.org/abs/2407.13640 |
Abstract | Person Re-identification (re-ID) in computer vision aims to recognize and
track individuals across different cameras. While previous research has mainly
focused on challenges such as pose variations and lighting changes, the impact
of extreme capture conditions is often not adequately addressed. These extreme
conditions, including varied lighting, camera styles, angles, and image
distortions, can significantly affect data distribution and re-ID accuracy.
Current research typically improves model generalization under normal shooting
conditions through data augmentation techniques such as brightness and
contrast adjustment, but pays less attention to the robustness of models under
extreme shooting conditions. To tackle this, we propose a multi-mode
synchronization learning (MMSL) strategy. This approach divides images into
grids, randomly selects grid blocks, and applies data augmentation methods
such as contrast and brightness adjustments to the selected blocks. This
introduces diverse transformations without altering the original image
structure, helping the model adapt to extreme variations, learn diverse
features, and thus better address the challenges in re-ID. Extensive
experiments on a simulated test set under extreme conditions demonstrate the
effectiveness of our method, which is crucial for enhancing model robustness
and adaptability in real-world scenarios and supports the future development
of person re-identification technology. |
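The grid-block augmentation the abstract describes can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the function name, grid size, selection fraction, and jitter ranges are all assumptions chosen for the example.

```python
import numpy as np

def grid_block_augment(img, grid=4, frac=0.3, rng=None):
    """Apply brightness/contrast jitter to randomly chosen grid blocks.

    img: float array in [0, 1] with shape (H, W, C). Returns a new array;
    unselected blocks are left untouched, so the global image structure
    is preserved, as the MMSL description requires.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = img.copy()
    h, w = img.shape[:2]
    # Block edges; uneven sizes are absorbed by integer rounding.
    ys = np.linspace(0, h, grid + 1, dtype=int)
    xs = np.linspace(0, w, grid + 1, dtype=int)
    cells = [(i, j) for i in range(grid) for j in range(grid)]
    n_pick = max(1, int(frac * len(cells)))
    picked = rng.choice(len(cells), size=n_pick, replace=False)
    for k in picked:
        i, j = cells[k]
        block = out[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
        alpha = rng.uniform(0.5, 1.5)   # contrast factor (assumed range)
        beta = rng.uniform(-0.2, 0.2)   # brightness shift (assumed range)
        # Contrast about mid-gray, then brightness shift, clipped to [0, 1].
        block[:] = np.clip(alpha * (block - 0.5) + 0.5 + beta, 0.0, 1.0)
    return out
```

In training, a transform like this would be applied per sample alongside the usual re-ID pipeline; the key design choice is that only local blocks are perturbed, so pose and silhouette cues stay intact.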
Author | Gong, Yunpeng; Zhang, Chuangliang; Hou, Yongjie; Jiang, Min |
Copyright | http://creativecommons.org/licenses/by-nc-nd/4.0 |
DOI | 10.48550/arxiv.2407.13640 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | false |
IsScholarly | false |
OpenAccessLink | https://arxiv.org/abs/2407.13640 |
SecondaryResourceType | preprint |
SourceID | arxiv |
SourceType | Open Access Repository |
SubjectTerms | Computer Science - Computer Vision and Pattern Recognition |