Learning Discriminative and Robust Representations for UAV-View Skeleton-Based Action Recognition


Bibliographic Details
Published in: IEEE International Conference on Multimedia and Expo Workshops (Online), pp. 1-6
Main Authors: Sun, Shaofan; Zhang, Jiahang; Tang, Guo; Jia, Chuanmin; Liu, Jiaying
Format: Conference Proceeding
Language: English
Published: IEEE, 15.07.2024

Summary: Skeleton-based human action recognition, a crucial topic in human action understanding, has attracted much attention recently. While many endeavors have been made toward skeleton-based action recognition in laboratory settings, the performance of these models suffers from data degradation caused by various real-world factors, e.g., diverse viewpoints and object occlusion. This work focuses on the challenging Unmanned Aerial Vehicle (UAV) view, which is more aligned with real-world scenarios, and proposes a simple yet effective framework to Learn discriminative and robust Representations for UAV-view skeleton-based action recognition (LRU). Experiments on the challenging large-scale UAV dataset, UAV-Human, demonstrate the effectiveness of our method, which surpasses state-of-the-art methods by 1.62% and 6.11% under the cross-subject-v1 and cross-subject-v2 protocols, respectively.
ISSN: 2995-1429
DOI: 10.1109/ICMEW63481.2024.10645407