Meta-Transfer-Learning-Based Multimodal Human Pose Estimation for Lower Limbs

Bibliographic Details
Published in: Sensors (Basel, Switzerland), Vol. 25, No. 5, p. 1613
Main Authors: Du, Guoming; Zhu, Haiqi; Ding, Zhen; Huang, Hong; Bie, Xiaofeng; Jiang, Feng
Format: Journal Article
Language: English
Published: Switzerland: MDPI AG, 06.03.2025

Summary: Accurate and reliable human pose estimation (HPE) is essential in interactive systems, particularly for applications requiring personalized adaptation, such as controlling cooperative robots and wearable exoskeletons, and especially for healthcare monitoring equipment. However, continuously maintaining diverse datasets and frequently updating models for individual adaptation are both resource-intensive and time-consuming. To address these challenges, we propose a meta-transfer learning framework that integrates multimodal inputs, including high-frequency surface electromyography (sEMG), visual-inertial odometry (VIO), and high-precision image data. The framework improves both accuracy and stability through a knowledge fusion strategy that resolves data alignment issues and ensures seamless integration of the different modalities. To further enhance adaptability, we introduce a training and adaptation framework with few-shot learning, enabling efficient updating of encoders and decoders for dynamic feature adjustment in real-time applications. Experimental results demonstrate that our framework provides accurate, high-frequency pose estimates, particularly for intra-subject adaptation. Our approach adapts efficiently to new individuals with only a few new samples, offering an effective solution for personalized motion analysis with minimal data.
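The few-shot adaptation the summary describes, updating a model pretrained across subjects using only a handful of samples from a new individual, can be sketched as an inner-loop fine-tuning step. The linear model, synthetic data, and hyperparameters below are illustrative assumptions for this sketch, not details taken from the paper:

```python
# Hypothetical sketch of few-shot intra-subject adaptation: a meta-learned
# initialization is fine-tuned with a few gradient steps on a small support
# set from a new subject. Model, data, and hyperparameters are illustrative
# assumptions, not the paper's actual method.
import numpy as np

rng = np.random.default_rng(0)

def mse(theta, X, y):
    """Mean squared error of a linear pose-regression model."""
    r = X @ theta - y
    return float(r @ r) / len(y)

def adapt(theta, X, y, lr=0.05, steps=100):
    """Inner loop: gradient descent on the few-shot support set."""
    theta = theta.copy()
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ theta - y) / len(y)
        theta -= lr * grad
    return theta

# Meta-learned initialization (zeros here, purely for illustration).
theta0 = np.zeros(3)

# A "new subject": an unseen linear mapping from features to pose target.
w_subject = np.array([1.0, -2.0, 0.5])
X_support = rng.normal(size=(8, 3))        # only 8 few-shot samples
y_support = X_support @ w_subject

theta_new = adapt(theta0, X_support, y_support)
loss_before = mse(theta0, X_support, y_support)
loss_after = mse(theta_new, X_support, y_support)
```

In practice the inner loop would update only the lightweight encoder/decoder parameters mentioned in the summary, keeping the shared multimodal backbone fixed, so that adaptation stays cheap enough for real-time use.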
ISSN: 1424-8220
DOI: 10.3390/s25051613