FineGym: A Hierarchical Video Dataset for Fine-Grained Action Understanding

Bibliographic Details
Published in: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2613-2622
Main Authors: Shao, Dian; Zhao, Yue; Dai, Bo; Lin, Dahua
Format: Conference Proceeding
Language: English
Published: IEEE, 01.01.2020

More Information
Summary: On public benchmarks, current action recognition techniques have achieved great success. However, when used in real-world applications, e.g. sports analysis, which require the capability of parsing an activity into phases and differentiating between subtly different actions, their performance remains far from satisfactory. To take action recognition to a new level, we develop FineGym, a new dataset built on top of gymnasium videos. Compared to existing action recognition datasets, FineGym is distinguished in richness, quality, and diversity. In particular, it provides temporal annotations at both action and sub-action levels with a three-level semantic hierarchy. For example, a "balance beam" activity will be annotated as a sequence of elementary sub-actions derived from five sets: "leap-jump-hop", "beam-turns", "flight-salto", "flight-handspring", and "dismount", where the sub-action in each set will be further annotated with finely defined class labels. This new level of granularity presents significant challenges for action recognition, e.g. how to parse the temporal structures from a coherent action, and how to distinguish between subtly different action classes. We systematically investigate different methods on this dataset and obtain a number of interesting findings. We hope this dataset could advance research towards action understanding.
ISSN:2575-7075
DOI:10.1109/CVPR42600.2020.00269
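To make the three-level hierarchy described in the summary concrete (event, e.g. a "balance beam" routine; set, e.g. "leap-jump-hop" or "dismount"; element, the finely defined class label with its temporal segment), the sketch below shows one hypothetical way such an annotation could be represented in Python. The class names, field names, and example element labels are illustrative assumptions, not FineGym's actual annotation schema or label set.

```python
# Hypothetical representation of a three-level FineGym-style annotation
# (event -> set -> element). Names and labels are illustrative only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ElementInstance:
    """A fine-grained sub-action (element level) with its temporal segment in seconds."""
    label: str   # e.g. "split leap with 1 turn" (illustrative label)
    start: float
    end: float


@dataclass
class ActionAnnotation:
    """An action (set level) grouping the elements it decomposes into."""
    set_name: str  # e.g. "leap-jump-hop", "dismount"
    elements: List[ElementInstance] = field(default_factory=list)


@dataclass
class EventAnnotation:
    """An event (top level), e.g. one balance-beam routine within a video."""
    event_name: str
    actions: List[ActionAnnotation] = field(default_factory=list)


# Example: a balance-beam routine annotated as a sequence of sub-actions.
routine = EventAnnotation(
    event_name="balance beam",
    actions=[
        ActionAnnotation("leap-jump-hop",
                         [ElementInstance("split leap with 1 turn", 3.2, 4.1)]),
        ActionAnnotation("dismount",
                         [ElementInstance("salto backward stretched", 78.0, 79.6)]),
    ],
)

for action in routine.actions:
    for element in action.elements:
        print(f"{routine.event_name} / {action.set_name} / {element.label}: "
              f"{element.start:.1f}s - {element.end:.1f}s")
```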