Multiple motion pattern augmentation assisted gait recognition
Published in | Signal Processing, Vol. 238, p. 110185
---|---
Main Authors | , , , , ,
Format | Journal Article
Language | English
Published | Elsevier B.V., 01.01.2026
Summary: Gait recognition aims to learn the unique walking patterns of different subjects for identity retrieval. Most methods focus on exploiting robust spatio-temporal representations via global–local feature learning or multi-scale temporal modeling. However, they may neglect the dynamic posture changes within consecutive frames. In addition, few methods explore useful motion cues in a data self-mining manner. In this paper, we propose a novel Motion Pattern Augmentation assisted gait recognition framework, named GaitMPA, which explores diverse behavior characteristics from the augmented movement sequences. GaitMPA consists of three components: the Motion perception and Fine-grained variation extraction Network (MFNet), Motion Pattern Augmentation (MPA), and Multi-stage Feature Aggregation (MFA). Specifically, MFNet captures the dynamic motion differences between neighboring frames with a Motion Perception Module (MPM) and extracts multi-grained body representations via a Fine-grained Variation Extractor (FVE). In MPA, raw sequences are transformed into four novel motion patterns to provide distinctive movement traits. Furthermore, MFA is designed to merge the multi-source features of the raw and augmented sequences and to perform multi-stage motion information aggregation. The outputs of MFNet and MFA are fused for gait recognition. Experimental results demonstrate the effectiveness of GaitMPA on five public datasets: CASIA-B (in-the-lab), OU-MVLP (in-the-lab), CCPG (cloth-changing), GREW (in-the-wild), and Gait3D (in-the-wild).
Highlights:
• We present a novel motion augmentation module to generate diverse gait sequences.
• We design a novel gait recognition method, GaitMPA, to learn powerful gait features.
• We validate the effectiveness of the proposed method on five public gait datasets.
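The abstract only names the framework's components (MFNet with MPM and FVE, MPA, MFA) without giving their designs, so the following is a minimal, hypothetical sketch of how such pieces might compose. All class names, the four example motion-pattern transforms, tensor shapes, and the pooling-based aggregation are assumptions for illustration, not the paper's actual architecture.

```python
# Hypothetical sketch of a GaitMPA-style pipeline, assuming PyTorch.
# Every module here is an illustrative stand-in inferred from the abstract.
import torch
import torch.nn as nn


class MotionPerceptionModule(nn.Module):
    """Assumed MPM: models frame-to-frame motion via temporal differencing."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv3d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):                                  # x: (B, C, T, H, W)
        diff = x[:, :, 1:] - x[:, :, :-1]                  # posture change between neighbors
        diff = torch.cat([diff, diff[:, :, -1:]], dim=2)   # pad to keep temporal length
        return x + self.conv(diff)


def motion_pattern_augmentation(seq):
    """Assumed MPA: derive several motion patterns from a raw sequence.
    seq: (B, C, T, H, W). The four transforms below are illustrative only."""
    return [
        seq,                                     # raw sequence
        torch.flip(seq, dims=[2]),               # time-reversed walking
        seq[:, :, ::2],                          # temporally subsampled (faster pace)
        (seq[:, :, 1:] - seq[:, :, :-1]).abs(),  # frame-difference (pure motion) pattern
    ]


class GaitMPASketch(nn.Module):
    """Toy composition: an MFNet-like backbone shared across all motion
    patterns, followed by a simple multi-source aggregation (MFA stand-in)."""
    def __init__(self, channels=1, dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(channels, dim, kernel_size=3, padding=1), nn.ReLU(),
            MotionPerceptionModule(dim),
        )
        self.head = nn.Linear(dim, 128)

    def forward(self, seq):
        feats = []
        for pattern in motion_pattern_augmentation(seq):
            f = self.backbone(pattern)            # shared weights across patterns
            feats.append(f.mean(dim=(2, 3, 4)))   # global spatio-temporal pooling
        fused = torch.stack(feats, dim=0).mean(0)  # aggregate raw + augmented features
        return self.head(fused)                    # embedding used for identity retrieval


# Usage with random silhouettes: batch of 2 sequences, 30 frames of 64x44 each.
emb = GaitMPASketch()(torch.rand(2, 1, 30, 64, 44))
print(emb.shape)  # torch.Size([2, 128])
```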
ISSN: 0165-1684
DOI: 10.1016/j.sigpro.2025.110185