Prosperous Human Gait Recognition: an end-to-end system based on pre-trained CNN features selection
Published in: Multimedia Tools and Applications, Vol. 83, No. 5, pp. 14979-14999
Format: Journal Article
Language: English
Published: New York: Springer US, 01.02.2024 (Springer Nature B.V.)
Summary: Human Gait Recognition (HGR) is a biometric approach that has been widely used for security purposes for the past few decades. In HGR, changes in an individual's walk, together with variations in clothing and carrying a bag, are the major covariate conditions that affect system performance. Moreover, recognition under various view angles is another key challenge in HGR. In this work, a novel fully automated method is proposed for HGR under various view angles using deep learning. Four primary steps are involved: preprocessing of the original video frames, exploiting a pre-trained DenseNet-201 CNN model for feature extraction, reduction of redundant features from the extracted vector through a hybrid selection method, and finally recognition using supervised learning methods. The extraction of CNN features is the key step, in which the target is to extract the most active features. To achieve this goal, the features of the second-last and third-last layers are fused in a parallel process. At a later stage, the best features are selected by the Firefly algorithm and a skewness-based approach. These selected features are serially combined and fed to a One-Against-All Multiclass Support Vector Machine (OAMSVM) for final recognition. Three view angles of the CASIA B dataset, 18°, 36° and 54°, are selected for the evaluation process, and accuracies of 94.3%, 93.8% and 94.7% are achieved, respectively. Results show a significant improvement in accuracy and recall rate compared to existing state-of-the-art techniques. (An illustrative code sketch of this pipeline follows the record below.)
ISSN: 1380-7501; 1573-7721 (electronic)
DOI: 10.1007/s11042-020-08928-0
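The summary above outlines a four-stage pipeline: frame preprocessing, DenseNet-201 feature extraction with fusion of two late layers, hybrid feature selection (Firefly algorithm plus a skewness criterion), and one-against-all multiclass SVM recognition. The sketch below is an illustrative reconstruction of that flow under stated assumptions, not the authors' code: the particular layers fused (pooled convolutional features and classifier logits), the skewness-only filter standing in for the hybrid Firefly + skewness selection, the number of retained features, and the linear SVM settings are all choices made only for this example.

```python
# Illustrative sketch of the HGR pipeline described in the summary.
# NOT the authors' implementation; layer choices, the skewness-only
# selection (a stand-in for the Firefly + skewness hybrid), and all
# hyper-parameters are assumptions made for demonstration.

import numpy as np
import torch
import torch.nn as nn
from torchvision import models, transforms
from scipy.stats import skew
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from PIL import Image

# 1) Pre-trained DenseNet-201 used purely as a feature extractor.
backbone = models.densenet201(weights="IMAGENET1K_V1")  # torchvision >= 0.13
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_fused_features(frame: Image.Image) -> np.ndarray:
    """Fuse two late-stage feature vectors (an assumed stand-in for the
    paper's second-last / third-last layer fusion)."""
    x = preprocess(frame).unsqueeze(0)
    fmap = backbone.features(x)                               # conv feature map
    pooled = torch.flatten(
        nn.functional.adaptive_avg_pool2d(torch.relu(fmap), 1), 1)  # 1920-D
    logits = backbone.classifier(pooled)                      # 1000-D logits
    return torch.cat([pooled, logits], dim=1).squeeze(0).numpy()

# 2) Skewness-based selection: keep the dimensions with the most asymmetric
#    distribution over the training set (an illustrative proxy for the
#    hybrid Firefly + skewness selection in the paper).
def select_by_skewness(features: np.ndarray, keep: int = 500) -> np.ndarray:
    scores = np.abs(skew(features, axis=0))
    return np.argsort(scores)[::-1][:keep]

# 3) One-against-all multiclass SVM for the final recognition step.
def train_recognizer(features: np.ndarray, labels: np.ndarray,
                     idx: np.ndarray) -> OneVsRestClassifier:
    clf = OneVsRestClassifier(SVC(kernel="linear", C=1.0))
    clf.fit(features[:, idx], labels)
    return clf
```

In use, a caller would extract fused features for every preprocessed gait frame of a subject and view angle, stack them into a matrix, choose the retained feature indices on the training split only, and reuse the same indices when classifying test frames.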