An extended variational autoencoder for cross-subject electromyograph gesture recognition
Published in | Biomedical signal processing and control Vol. 99; p. 106828
---|---
Main Authors | , , , ,
Format | Journal Article
Language | English
Published | Elsevier Ltd, 01.01.2025
Summary:
• The study proposes a novel cross-subject gesture recognition approach.
• An extended VAE is designed to disentangle the input data into three representations.
• A competitive voting strategy is introduced to further bolster accuracy and stability in recognition.
• The performance of the proposed method is evaluated on the Myo dataset.
• The source code and the Myo dataset will be made publicly available.
Surface electromyographic hand gesture recognition has gained significant attention in recent years, especially within the field of human–computer interfaces. However, cross-subject tasks remain challenging due to inherent individual differences. To address this, a novel hand gesture recognition approach is proposed that leverages a subject-generalized variational autoencoder. An extended variational autoencoder is designed to disentangle the input data into three distinct feature-specific representations. The primary classifier within the variational autoencoder focuses on gesture recognition, while two auxiliary classifiers work together to extract subject-specific and gesture-specific features. The gesture-specific features capture generalized characteristics shared across all subjects, enabling direct application to new subjects. To enhance accuracy and stability, a competitive voting strategy is implemented. The effectiveness of the proposed method was evaluated on a dataset comprising six representative gestures performed by eight subjects. Comparative analysis shows that the proposed approach outperforms baseline models, demonstrating superior generalization with an average accuracy of 90.52% in cross-subject validation.
ISSN | 1746-8094
DOI | 10.1016/j.bspc.2024.106828
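The abstract describes the architecture only at a high level: a VAE whose latent space is disentangled into three feature-specific partitions, a primary gesture classifier, and two auxiliary classifiers. The sketch below is a hypothetical PyTorch illustration of that idea, not the authors' implementation; the layer sizes, the input dimension, how the auxiliary classifiers are attached to the latent partitions, the loss weighting, and names such as `ExtendedVAE` and `elbo_with_classifiers` are all assumptions made for illustration. The competitive voting step is not sketched because the abstract does not specify it.

```python
# Hypothetical sketch (assumptions, not the paper's code): a VAE whose latent
# space is split into three partitions (gesture-specific, subject-specific,
# residual), with a primary gesture classifier and two auxiliary classifiers.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ExtendedVAE(nn.Module):
    def __init__(self, in_dim=64, z_dim=16, n_gestures=6, n_subjects=8):
        super().__init__()
        # Shared encoder emits (mu, logvar) for each of the three latent partitions.
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, 3 * 2 * z_dim))
        # Decoder reconstructs the input from all three partitions together.
        self.decoder = nn.Sequential(nn.Linear(3 * z_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))
        self.gesture_clf = nn.Linear(z_dim, n_gestures)      # primary classifier
        self.subject_clf = nn.Linear(z_dim, n_subjects)      # auxiliary classifier (subject)
        self.aux_gesture_clf = nn.Linear(z_dim, n_gestures)  # auxiliary classifier (gesture)

    @staticmethod
    def reparameterize(mu, logvar):
        # Standard VAE reparameterisation trick.
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x):
        chunks = self.encoder(x).chunk(6, dim=-1)             # 3 partitions x (mu, logvar)
        mus, logvars = chunks[0::2], chunks[1::2]
        z_g, z_s, z_r = [self.reparameterize(m, lv) for m, lv in zip(mus, logvars)]
        recon = self.decoder(torch.cat([z_g, z_s, z_r], dim=-1))
        return recon, mus, logvars, (self.gesture_clf(z_g),
                                     self.subject_clf(z_s),
                                     self.aux_gesture_clf(z_r))


def elbo_with_classifiers(model, x, y_gesture, y_subject, beta=1.0):
    # Reconstruction + KL terms plus the three classification losses;
    # equal weighting is an assumption made for this sketch.
    recon, mus, logvars, (logit_g, logit_s, logit_aux) = model(x)
    kl = sum(-0.5 * torch.sum(1 + lv - m.pow(2) - lv.exp())
             for m, lv in zip(mus, logvars))
    return (F.mse_loss(recon, x, reduction="sum") + beta * kl
            + F.cross_entropy(logit_g, y_gesture)
            + F.cross_entropy(logit_s, y_subject)
            + F.cross_entropy(logit_aux, y_gesture))


# Toy usage with random data: 8 windows of 64-dim sEMG features,
# 6 gesture classes and 8 training subjects, mirroring the dataset scale.
x = torch.randn(8, 64)
y_g = torch.randint(0, 6, (8,))
y_s = torch.randint(0, 8, (8,))
model = ExtendedVAE()
loss = elbo_with_classifiers(model, x, y_g, y_s)
loss.backward()
```

At inference on an unseen subject, only the gesture-specific partition and the primary classifier would be needed, which is what makes the gesture-specific features directly applicable to new subjects.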