Performing content-based retrieval of humans using gait biometrics

Bibliographic Details
Published in: Multimedia Tools and Applications, Vol. 49, No. 1, pp. 195-212
Main Authors: Samangooei, Sina; Nixon, Mark S.
Format: Journal Article
Language: English
Published: Boston: Springer US, 01.08.2010 (Springer Nature B.V.)

Summary: In order to analyse surveillance video, we need to efficiently explore large datasets containing videos of walking humans. Effective analysis of such data relies on retrieval of video data which has been enriched using semantic annotations. A manual annotation process is time-consuming and prone to error due to subject bias; however, at surveillance-image resolution, the human walk (their gait) can be analysed automatically. We explore the content-based retrieval of videos containing walking subjects, using semantic queries. We evaluate current research in gait biometrics, unique in its effectiveness at recognising people at a distance. We introduce a set of semantic traits discernible by humans at a distance, outlining their psychological validity. Working under the premise that similarity of the chosen gait signature implies similarity of certain semantic traits, we perform a set of semantic retrieval experiments using popular Latent Semantic Analysis techniques. We perform experiments on a dataset of 2000 videos of people walking in laboratory conditions and achieve promising retrieval results for features such as Sex (mAP = 14% above random), Age (mAP = 10% above random) and Ethnicity (mAP = 9% above random).
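The pipeline the summary describes — project gait signatures into a latent space, rank the gallery by similarity to a query, and score retrieval of a semantic trait with mean Average Precision against a random baseline — can be sketched briefly. The sketch below is illustrative only: the data is synthetic, the 512-dimensional signature and 50-component latent space are assumed placeholders, and it uses a plain truncated SVD as a stand-in for the authors' exact Latent Semantic Analysis formulation.

```python
# Illustrative sketch of LSA-style semantic retrieval over gait signatures.
# All data here is synthetic; dimensions and the binary trait are assumptions,
# not the authors' actual dataset or pipeline.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 2000 gait signatures (one per video), each video
# tagged with one binary semantic trait (e.g. sex).
n_videos, n_features, k = 2000, 512, 50
X = rng.standard_normal((n_videos, n_features))
traits = rng.integers(0, 2, size=n_videos)     # 0/1 trait label per video

# LSA step: centre the signatures and project onto the top-k right
# singular directions to obtain a latent representation.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T                              # (n_videos, k) latent space

# Retrieval: rank the gallery by cosine similarity to a query video.
Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)

def average_precision(query_idx: int) -> float:
    """AP for one query: a gallery video is 'relevant' if it shares the trait."""
    sims = Zn @ Zn[query_idx]
    ranking = np.argsort(-sims)
    ranking = ranking[ranking != query_idx]    # exclude the query itself
    rel = (traits[ranking] == traits[query_idx]).astype(float)
    if rel.sum() == 0:
        return 0.0
    precision_at_hits = np.cumsum(rel) / (np.arange(len(rel)) + 1)
    return float((precision_at_hits * rel).sum() / rel.sum())

# mAP over all queries, reported next to the trait's prior (the expected
# score of a random ranking), mirroring the paper's "above random" figures.
mAP = np.mean([average_precision(i) for i in range(n_videos)])
baseline = np.mean([np.mean(traits[np.arange(n_videos) != i] == traits[i])
                    for i in range(n_videos)])
print(f"mAP = {mAP:.3f}  (random baseline = {baseline:.3f})")
```

With random features, mAP sits at the baseline; reporting the difference between the two is what the summary's "mAP = 14% above random" style of result refers to.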
ISSN: 1380-7501
EISSN: 1573-7721
DOI: 10.1007/s11042-009-0391-8