High‐precision skeleton‐based human repetitive action counting


Bibliographic Details
Published in: IET Computer Vision, Vol. 17, No. 6, pp. 700–709
Main Authors: Li, Chengxian; Shao, Ming; Yang, Qirui; Xia, Siyu
Format: Journal Article
Language: English
Published: Stevenage: John Wiley & Sons, Inc (Wiley), 01.09.2023

Summary: The authors present a novel counting model that estimates the number of repetitive actions in temporal 3D skeleton data. To the best of the authors' knowledge, this is the first work to use skeleton data for high-precision repetitive action counting. Unlike existing works on RGB video data, the model follows a bottom-up pipeline that first clips sub-actions and then robustly aggregates them at inference. First, novel counting loss functions and a robust inference procedure with backtracking are proposed to obtain a precise per-frame count as well as an overall count that handles boundary frames. Second, an efficient synthetic approach is proposed to augment skeleton data during training, avoiding time-consuming collection of repetitive action data. Finally, a challenging human repetitive action counting dataset named VSRep, covering various types of actions, is collected to evaluate the proposed model. Experiments demonstrate that the proposed counting model outperforms existing video-based methods by a large margin in accuracy with real-time inference.
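The abstract describes the method only at a high level, so the following Python sketch is purely illustrative of the two ideas it mentions: (a) a bottom-up aggregation that turns hypothetical per-frame sub-action probabilities into an overall count, with a minimum-run rule standing in for the paper's boundary handling and backtracking, and (b) a simple way to synthesize a repetitive training sequence from a single skeleton clip with temporal and spatial jitter. All function names, parameters, and the specific jitter scheme are assumptions, not the authors' actual implementation.

```python
import numpy as np

def count_repetitions(frame_probs, threshold=0.5, min_len=5):
    """Aggregate per-frame sub-action probabilities into an overall count.

    frame_probs: (T,) array, probability that each frame lies inside a
    sub-action clip. A repetition is counted for every contiguous run of
    above-threshold frames; runs shorter than `min_len` are treated as noise
    (a crude stand-in for the paper's boundary handling and backtracking).
    """
    inside = np.asarray(frame_probs) >= threshold
    count, run = 0, 0
    for flag in inside:
        if flag:
            run += 1
        else:
            if run >= min_len:
                count += 1
            run = 0
    if run >= min_len:  # boundary frames at the end of the sequence
        count += 1
    return count

def synthesize_repetitions(clip, n_reps, speed_range=(0.8, 1.2),
                           noise_std=0.01, rng=None):
    """Build a synthetic repetitive skeleton sequence from a single action clip.

    clip: (T, J, 3) array, one cycle of the action (T frames, J joints, xyz).
    Returns (sequence, n_reps); n_reps serves as the training count label.
    """
    rng = np.random.default_rng() if rng is None else rng
    T = clip.shape[0]
    reps = []
    for _ in range(n_reps):
        speed = rng.uniform(*speed_range)              # temporal jitter
        new_T = max(2, int(round(T / speed)))
        idx = np.linspace(0, T - 1, new_T)
        lo, hi = np.floor(idx).astype(int), np.ceil(idx).astype(int)
        w = (idx - lo)[:, None, None]
        resampled = (1 - w) * clip[lo] + w * clip[hi]  # linear time resampling
        resampled += rng.normal(0.0, noise_std, resampled.shape)  # joint noise
        reps.append(resampled)
    return np.concatenate(reps, axis=0), n_reps
```

In this reading, training pairs a synthesized sequence with its known repetition count, while inference reduces the per-frame predictions to a single count; the actual loss functions and backtracking scheme are detailed only in the full paper.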
ISSN: 1751-9632, 1751-9640
DOI: 10.1049/cvi2.12193