From Human Pose to On-Body Devices for Human-Activity Recognition
Published in | 2020 25th International Conference on Pattern Recognition (ICPR), pp. 10066 - 10073 |
---|---|
Main Authors | , |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 10.01.2021 |
Subjects | |
Summary: | Human Activity Recognition (HAR) using inertial measurements from on-body devices has not benefited greatly from deep architectures. This drawback is mainly due to the lack of annotated data, the diversity of on-body device configurations, the class-imbalance problem, and non-standard definitions of human activities. Approaches for improving the performance of such architectures, e.g., transfer learning, are therefore difficult to apply. This paper introduces a method for transfer learning from human-pose estimations as a source for improving HAR on inertial measurements obtained from on-body devices. We propose to fine-tune deep architectures, trained on sequences of human poses from a large dataset and their derivatives, for solving HAR on inertial measurements from on-body devices. Derivatives of human poses are treated as a form of synthetic data for HAR. We deploy two different temporal-convolutional architectures as classifiers. An evaluation on three benchmark datasets shows improved classification performance. |
---|---|
DOI: | 10.1109/ICPR48806.2021.9412283 |
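
The summary above outlines a two-stage transfer-learning recipe: pre-train a temporal-convolutional classifier on derivatives of human-pose sequences (used as synthetic, inertial-like data), then fine-tune it on inertial measurements from on-body devices. The sketch below illustrates that idea in PyTorch under assumed shapes and names; `TemporalConvNet`, `pose_derivatives`, the channel counts, and the random stand-in tensors are hypothetical placeholders, not the authors' implementation or data.

```python
# Illustrative sketch of pose-to-IMU transfer learning for HAR.
# Not the paper's released code; all names and shapes are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


class TemporalConvNet(nn.Module):
    """Small temporal-convolutional classifier over [batch, channels, time] input."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                     # pool over the time axis
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).squeeze(-1))


def pose_derivatives(poses: torch.Tensor) -> torch.Tensor:
    """First-order temporal differences of pose sequences ([batch, channels, time]),
    treated here as synthetic, inertial-like channels."""
    return poses[..., 1:] - poses[..., :-1]


def run_epochs(model: nn.Module, loader: DataLoader, epochs: int, lr: float) -> None:
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()


if __name__ == "__main__":
    num_classes = 8

    # Stage 1: pre-train on pose-derivative sequences (random stand-in data).
    poses = torch.randn(256, 66, 100)                    # e.g. 22 joints x 3 coordinates
    pose_labels = torch.randint(0, num_classes, (256,))
    pose_loader = DataLoader(
        TensorDataset(pose_derivatives(poses), pose_labels), batch_size=32
    )
    model = TemporalConvNet(in_channels=66, num_classes=num_classes)
    run_epochs(model, pose_loader, epochs=2, lr=1e-3)

    # Stage 2: fine-tune on on-body IMU data; swap only the input convolution
    # if the channel count differs, keeping the learned temporal filters.
    imu = torch.randn(128, 30, 100)                      # e.g. 5 IMUs x 6 channels
    imu_labels = torch.randint(0, num_classes, (128,))
    imu_loader = DataLoader(TensorDataset(imu, imu_labels), batch_size=32)
    model.features[0] = nn.Conv1d(30, 64, kernel_size=5, padding=2)
    run_epochs(model, imu_loader, epochs=2, lr=1e-4)
```

Replacing only the input convolution when the IMU channel count differs from the pose-derivative channel count is one simple way to reuse the pre-trained temporal filters with a smaller fine-tuning learning rate; the paper's actual fine-tuning strategy and architectures may differ.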