Ghost-HRNet: a lightweight high-resolution network for efficient human pose estimation with enhanced multi-scale feature fusion

Bibliographic Details
Published in: Pattern Analysis and Applications (PAA), Vol. 28, No. 2
Main Authors: Zheng, Xiaoyu; Zhuang, Liping; Chen, Dewang
Format: Journal Article
Language: English
Published: London: Springer London (Springer Nature B.V.), 01.06.2025

Summary: Human pose estimation (HPE) is a critical task in computer vision, with applications spanning human-computer interaction, intelligent surveillance, behavior analysis, virtual reality, and medical diagnosis. However, the existing high-resolution network (HRNet) faces challenges due to its large parameter count and low computational efficiency, limiting its real-time applicability. To address these issues, this paper introduces Ghost-HRNet, a lightweight HPE network that integrates the efficient feature extraction capabilities of the Ghost module with the multi-scale feature fusion strengths of HRNet. By incorporating depthwise separable convolution and the convolutional block attention module (CBAM), Ghost-HRNet achieves significant reductions in parameter count and computational load while maintaining high accuracy. Experimental results on the COCO and MPII datasets demonstrate that Ghost-HRNet achieves average accuracies of 66% and 87.26%, respectively, while reducing the parameter size by 71.3% and computational load by 79.0% compared to HRNet. This combination of efficiency and accuracy makes Ghost-HRNet particularly suitable for real-time applications, underscoring its potential to advance HPE technology.
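
The summary names the building blocks of the lightweight design: the Ghost module for cheap feature generation, depthwise separable convolution, and CBAM attention. The record does not give the paper's exact configuration, so the PyTorch sketch below follows the standard definitions from the original GhostNet (Han et al., 2020) and CBAM (Woo et al., 2018) papers; the ratio, reduction, and kernel-size defaults are assumptions, not Ghost-HRNet's published settings.

import torch
import torch.nn as nn

class GhostModule(nn.Module):
    # Ghost module: a primary convolution produces a few intrinsic feature
    # maps; a cheap depthwise convolution then derives the remaining "ghost"
    # maps from them, roughly dividing parameters by `ratio`.
    # Assumes out_ch is a multiple of ratio (typical for even channel widths).
    def __init__(self, in_ch, out_ch, ratio=2, dw_size=3):
        super().__init__()
        init_ch = out_ch // ratio
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(  # depthwise: one filter per intrinsic map
            nn.Conv2d(init_ch, out_ch - init_ch, dw_size,
                      padding=dw_size // 2, groups=init_ch, bias=False),
            nn.BatchNorm2d(out_ch - init_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        primary = self.primary(x)
        return torch.cat([primary, self.cheap(primary)], dim=1)

class CBAM(nn.Module):
    # CBAM: channel attention followed by spatial attention.
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(  # shared MLP for both pooled descriptors
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False))
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        # Channel gate: average- and max-pool over space, share the MLP, sum.
        gate = torch.sigmoid(self.mlp(x.mean(dim=(2, 3), keepdim=True))
                             + self.mlp(x.amax(dim=(2, 3), keepdim=True)))
        x = x * gate
        # Spatial gate: average- and max-pool over channels, 7x7 conv, sigmoid.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

With ratio=2, a GhostModule spends standard-convolution parameters on only half of its output channels, the kind of saving that, combined with depthwise separable convolutions, underlies the reported 71.3% parameter reduction. A quick shape check: GhostModule(32, 64)(torch.randn(1, 32, 56, 56)) yields a tensor of shape (1, 64, 56, 56).
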
ISSN: 1433-7541, 1433-755X
DOI: 10.1007/s10044-025-01440-x