Comparison of video-based algorithms for 2D human kinematics estimation: a preliminary study

Bibliographic Details
Published in: Journal of Physics: Conference Series, Vol. 2590, No. 1, pp. 12002-12012
Main Authors: Ceriola, Luca; Mileti, Ilaria; Donati, Marco; Patanè, Fabrizio
Format: Journal Article
Language: English
Published: Bristol: IOP Publishing, 01.09.2023

Summary: Considerable research effort has been devoted to developing robust video-based algorithms for human pose estimation. Our goal was to compare video-based pose estimation algorithms for gait analysis. We conducted an experiment with a healthy subject performing walking sessions on a treadmill at three speeds: slow (3.6 km/h), medium (5 km/h), and high (7 km/h). An RGB 4K camera was placed laterally to capture the sagittal plane. Four algorithms were compared: (i) colour-threshold filtering with blob analysis, and three deep-learning-based markerless algorithms, (ii) TC-Former, (iii) FastPose, and (iv) BlazePose. For the colour-threshold filtering with blob analysis algorithm, six magenta passive markers were placed over the joint centres of the subject's lower limb. All selected deep-learning-based markerless algorithms are supported by open-source pose estimation toolboxes and are pre-trained on several whole-body keypoint datasets. The 2D trajectories of the joint centres were compared using the root mean square error and Pearson's correlation coefficient. Preliminary results showed high correlations between the marker-based and markerless algorithms at all walking speeds. TC-Former generally performed best, with root mean square errors on trajectories below 35 mm, and it did not suffer from self-occlusion issues.
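For readers who want a concrete picture of the pipeline the abstract names, below is a minimal Python sketch of its two ingredients: colour-threshold filtering with blob analysis to locate the magenta markers in a frame, and RMSE plus Pearson's coefficient to compare 2D joint-centre trajectories. The HSV bounds, marker count, and function names are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np
from scipy.stats import pearsonr

def detect_magenta_markers(frame_bgr, expected=6):
    """Colour-threshold filtering + blob analysis: isolate magenta pixels
    in HSV space and return the centroids of the largest blobs."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Hypothetical HSV bounds for magenta; these would need tuning to the
    # actual markers and lighting conditions used in the study.
    mask = cv2.inRange(hsv, (140, 80, 80), (170, 255, 255))
    # Remove speckle noise before extracting connected components.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Skip the background component (label 0) and keep the largest blobs,
    # one per passive marker.
    blobs = sorted(
        zip(stats[1:, cv2.CC_STAT_AREA], centroids[1:]),
        key=lambda b: -b[0],
    )[:expected]
    return np.array([c for _, c in blobs])  # (N, 2) pixel coordinates

def trajectory_agreement(ref_xy, est_xy):
    """RMSE (in the trajectories' units) and per-axis Pearson's r between a
    reference joint-centre trajectory and a markerless estimate, both given
    as (T, 2) arrays sampled at the same instants."""
    err = est_xy - ref_xy
    rmse = np.sqrt(np.mean(np.sum(err ** 2, axis=1)))
    r_x, _ = pearsonr(ref_xy[:, 0], est_xy[:, 0])
    r_y, _ = pearsonr(ref_xy[:, 1], est_xy[:, 1])
    return rmse, r_x, r_y
```

In practice the detected blob centroids would still need to be associated with specific joints frame to frame (for example, by proximity to the previous frame's positions) before the marker-based and markerless trajectories can be compared sample by sample.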
ISSN: 1742-6588
eISSN: 1742-6596
DOI: 10.1088/1742-6596/2590/1/012002