The THUMOS challenge on action recognition for videos “in the wild”

Bibliographic Details
Published in: Computer Vision and Image Understanding, Vol. 155, pp. 1–23
Main Authors: Idrees, Haroon; Zamir, Amir R.; Jiang, Yu-Gang; Gorban, Alex; Laptev, Ivan; Sukthankar, Rahul; Shah, Mubarak
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.02.2017
Summary:
• The THUMOS challenge was introduced in 2013 to serve as a benchmark for action recognition.
• In this paper we describe the THUMOS benchmark in detail.
• We give an overview of the data collection and annotation procedures.
• We present results of submissions to the THUMOS 2015 challenge and review the participating approaches.
• We conclude by proposing several directions and improvements for future THUMOS challenges.

Automatically recognizing and localizing a wide range of human actions is crucial for video understanding. Towards this goal, the THUMOS challenge was introduced in 2013 to serve as a benchmark for action recognition. Until then, video action recognition, including the THUMOS challenge, had focused primarily on the classification of pre-segmented (i.e., trimmed) videos, which is an artificial task. In THUMOS 2014, we elevated action recognition to a more practical level by introducing temporally untrimmed videos. These also include ‘background videos’, which share similar scenes and backgrounds with action videos but are devoid of the specific actions. The three editions of the challenge organized in 2013–2015 have made THUMOS a common benchmark for action classification and detection, and the annual challenge is widely attended by teams from around the world. In this paper we describe the THUMOS benchmark in detail and give an overview of the data collection and annotation procedures. We present the evaluation protocols used to quantify results in the two THUMOS tasks of action classification and temporal action detection. We also present results of submissions to the THUMOS 2015 challenge and review the participating approaches. Additionally, we include a comprehensive empirical study evaluating the differences in action recognition between trimmed and untrimmed videos, and how well methods trained on trimmed videos generalize to untrimmed videos. We conclude by proposing several directions and improvements for future THUMOS challenges.
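The temporal action detection task mentioned in the abstract is typically scored with mean average precision at temporal intersection-over-union (tIoU) thresholds. The official protocol is defined by the THUMOS evaluation toolkit; the snippet below is only a minimal illustrative sketch of the tIoU matching idea, using hypothetical segment data, not the challenge's actual scoring code.

# Minimal sketch of temporal IoU (tIoU) matching for action detection
# evaluation. Illustrative only; the official THUMOS toolkit defines
# the real protocol. Segments are (start, end) pairs in seconds.

def temporal_iou(a, b):
    """Temporal intersection-over-union of two segments (start, end)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0.0 else 0.0

def match_detections(preds, gts, tiou_thresh=0.5):
    """Greedily match score-sorted predictions to unmatched ground-truth
    segments; returns a true/false-positive flag per prediction."""
    preds = sorted(preds, key=lambda p: p["score"], reverse=True)
    matched = [False] * len(gts)
    flags = []
    for p in preds:
        ious = [temporal_iou(p["segment"], g) for g in gts]
        best = max(range(len(gts)), key=lambda i: ious[i], default=None)
        if best is not None and ious[best] >= tiou_thresh and not matched[best]:
            matched[best] = True
            flags.append(True)   # true positive
        else:
            flags.append(False)  # false positive (low overlap or duplicate)
    return flags

# Hypothetical example: two ground-truth segments, three detections.
gts = [(2.0, 6.0), (10.0, 14.0)]
preds = [
    {"segment": (1.5, 5.5), "score": 0.9},
    {"segment": (9.0, 13.0), "score": 0.8},
    {"segment": (20.0, 22.0), "score": 0.4},
]
print(match_detections(preds, gts))  # [True, True, False]

From the per-prediction flags, precision and recall curves (and hence average precision) follow in the usual way; THUMOS reports detection results across several tIoU thresholds rather than a single one.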
ISSN: 1077-3142
eISSN: 1090-235X
DOI: 10.1016/j.cviu.2016.10.018