A Multiviewpoint Outdoor Dataset for Human Action Recognition

Bibliographic Details
Published in: IEEE Transactions on Human-Machine Systems, Vol. 50, No. 5, pp. 405-413
Main Authors: Perera, Asanka G.; Law, Yee Wei; Ogunwa, Titilayo T.; Chahl, Javaan
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.10.2020
Summary: Advancements in deep neural networks have contributed to near-perfect results for many computer vision problems, such as object recognition, face recognition, and pose estimation. However, human action recognition is still far from human-level performance. Owing to the articulated nature of the human body, it is challenging to detect an action from multiple viewpoints, particularly from an aerial viewpoint. This is further compounded by a scarcity of datasets that cover multiple viewpoints of actions. To fill this gap and enable research in wider application areas, in this article we present a multiviewpoint outdoor action recognition dataset collected from YouTube and our own drone. The dataset consists of 20 dynamic human action classes, 2324 video clips, and 503 086 frames. All videos are cropped and resized to 720 × 720 without distorting the original aspect ratio of the human subjects in videos. This dataset should be useful to many research areas, including action recognition, surveillance, and situational awareness. We evaluate the dataset with a two-stream convolutional neural network architecture coupled with a recently proposed temporal pooling scheme called kernelized rank pooling that produces nonlinear feature subspace representations. The overall baseline action recognition accuracy is 74.0%.
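The summary mentions two technical steps worth illustrating. First, the frame preprocessing: clips are cropped and resized to 720 × 720 without distorting the subjects' aspect ratio. The record does not say how this was done, so the Python sketch below assumes a common "letterbox" approach (scale the longer side to 720, pad the shorter side with black borders) using OpenCV; the function name `letterbox_720` is hypothetical.

```python
import cv2
import numpy as np

def letterbox_720(frame: np.ndarray, size: int = 720) -> np.ndarray:
    """Resize a frame to size x size while preserving its aspect ratio.

    Assumption: the longer side is scaled to `size` and the shorter side
    is padded symmetrically with black borders. The record does not
    specify the dataset's actual cropping/resizing procedure.
    """
    h, w = frame.shape[:2]
    scale = size / max(h, w)                        # fit the longer side
    new_w = int(round(w * scale))
    new_h = int(round(h * scale))
    resized = cv2.resize(frame, (new_w, new_h), interpolation=cv2.INTER_AREA)
    top = (size - new_h) // 2                       # vertical padding
    left = (size - new_w) // 2                      # horizontal padding
    return cv2.copyMakeBorder(
        resized,
        top, size - new_h - top,
        left, size - new_w - left,
        borderType=cv2.BORDER_CONSTANT,
        value=(0, 0, 0),
    )
```

Second, the baseline couples a two-stream CNN with kernelized rank pooling. For intuition only, the sketch below implements plain linear rank pooling, where regression parameters that order frame features by time serve as a fixed-length video descriptor; the kernelized, nonlinear-subspace variant actually evaluated in the paper is more involved than this.

```python
import numpy as np

def rank_pool(features: np.ndarray, reg: float = 1.0) -> np.ndarray:
    """Linear rank pooling of a (T, D) sequence of frame features.

    Fits parameters u so that u @ v_t increases with frame index t,
    where v_t is the running mean of the features up to frame t; u is
    then used as the video descriptor. Shown for intuition only -- the
    paper evaluates a kernelized variant, not this linear formulation.
    """
    T, D = features.shape
    # Running mean smooths per-frame noise before fitting the ranker.
    V = np.cumsum(features, axis=0) / np.arange(1, T + 1)[:, None]
    t = np.arange(1, T + 1, dtype=np.float64)       # target ordering scores
    # Ridge regression: u = (V^T V + reg * I)^{-1} V^T t
    return np.linalg.solve(V.T @ V + reg * np.eye(D), V.T @ t)
```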
ISSN: 2168-2291
EISSN: 2168-2305
DOI: 10.1109/THMS.2020.2971958