The TUM Gait from Audio, Image and Depth (GAID) database: Multimodal recognition of subjects and traits
Published in | Journal of Visual Communication and Image Representation, Vol. 25, No. 1, pp. 195–206 |
---|---|
Main Authors | Martin Hofmann, Jürgen Geiger, Sebastian Bachmann, Björn Schuller, Gerhard Rigoll |
Format | Journal Article |
Language | English |
Published | Elsevier Inc., 01.01.2014 |
Summary: | • Presentation of the new, freely available TUM Gait from Audio, Image and Depth (GAID) database. • Advancing gait-based person identification through multimodal feature extraction. • Gait-based recognition of person traits: gender, age, height, and shoe type. • Baseline results and fusion for gait recognition using RGB, depth, and audio.
Recognizing people by the way they walk, also known as gait recognition, has been studied extensively in the recent past. Recent gait recognition methods focus solely on data extracted from an RGB video stream. With this work, we provide a means for multimodal gait recognition by introducing the freely available TUM Gait from Audio, Image and Depth (GAID) database, which contains simultaneously recorded RGB video, depth, and audio. With 305 people in three variations, it is one of the largest gait databases to date. To further investigate the challenges of time variation, a subset of 32 people was recorded a second time. We define standardized experimental setups both for person identification and for the assessment of the soft biometrics age, gender, height, and shoe type. For all defined experiments, we present several baseline results on all available modalities, effectively demonstrating that multimodal fusion is beneficial to gait recognition. |
ISSN: | 1047-3203 (print); 1095-9076 (electronic) |
DOI: | 10.1016/j.jvcir.2013.02.006 |
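
The abstract reports that fusing the RGB, depth, and audio modalities benefits recognition. The record does not describe the fusion method itself, so the sketch below is only a generic illustration of weighted score-level fusion in Python: per-modality similarity scores for each gallery subject are z-normalized and combined as a weighted sum. The function name, the weights, and the normalization step are assumptions for illustration, not the authors' approach.

```python
import numpy as np

def fuse_scores(scores_by_modality: dict, weights: dict) -> np.ndarray:
    """Weighted score-level fusion (illustrative sketch): z-normalize each
    modality's scores to a comparable scale, then take a weighted sum."""
    fused = None
    for name, scores in scores_by_modality.items():
        z = (scores - scores.mean()) / (scores.std() + 1e-9)  # z-normalization
        contribution = weights[name] * z
        fused = contribution if fused is None else fused + contribution
    return fused

# Hypothetical example: three modality classifiers each score all 305
# gallery subjects (the TUM GAID gallery size) for one probe sequence.
rng = np.random.default_rng(0)
scores = {m: rng.random(305) for m in ("rgb", "depth", "audio")}
weights = {"rgb": 0.5, "depth": 0.3, "audio": 0.2}  # illustrative weights only

fused = fuse_scores(scores, weights)
predicted_subject = int(np.argmax(fused))  # identity = highest fused score
print(predicted_subject)
```

In such a scheme the weights would typically be tuned on a validation set; the point is simply that normalized evidence from complementary sensors can be combined before the final nearest-neighbor or argmax decision.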