Decoding Electromyographic Signal with Multiple Labels for Hand Gesture Recognition

Bibliographic Details
Published in: IEEE Signal Processing Letters, Vol. 30, pp. 1-5
Main Authors: Zou, Yongxiang; Cheng, Long; Han, Lijun; Li, Zhengwei; Song, Luping
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2023

Summary: Surface electromyography (sEMG) is a significant interaction signal in the fields of human-computer interaction and rehabilitation assessment, as it can be used for hand gesture recognition. This paper proposes a novel MLHG model to improve the robustness of sEMG-based hand gesture recognition. The model utilizes multiple labels to decode the sEMG signals from two different perspectives. In the first view, the sEMG signals are transformed into motion signals using the proposed FES-MSCNN (Feature Extraction of sEMG with Multiple Sub-CNN modules), and a discriminator, FEM-SAGE (Feature Extraction of Motion with graph SAmple and aggreGatE model), is employed to judge the authenticity of the generated motion data; the deep features of the motion signals are extracted with the FEM-SAGE model. In the second view, the deep features of the sEMG signals are extracted using the FES-MSCNN model. The extracted features of the sEMG signals and the generated motion signals are then fused for hand gesture recognition. To evaluate the performance of the proposed model, a dataset containing sEMG signals and multiple labels from 12 subjects was collected. The experimental results indicate that the MLHG model achieves an accuracy of 99.26% for within-session hand gesture recognition, 78.47% for cross-time, and 53.52% for cross-subject recognition. These results represent a significant improvement over using only the gesture labels, with accuracy gains of 1.91%, 5.35%, and 5.25% in the within-session, cross-time, and cross-subject cases, respectively.
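The abstract describes a two-view design: one branch maps an sEMG window to a generated motion signal, whose realism is judged by a GraphSAGE-style discriminator (FEM-SAGE) and whose deep features are extracted by the same model, while the other branch extracts features directly from the sEMG with multiple sub-CNN modules (FES-MSCNN); the two feature sets are fused for gesture classification. The snippet below is only a minimal illustrative sketch of that fusion idea, not the authors' MLHG implementation: the layer sizes, kernel choices, the 15-node hand graph, and the simplified mean-aggregation layer standing in for GraphSAGE are all assumptions, and the adversarial training of the motion generator/discriminator is omitted.

```python
# Minimal illustrative sketch of the two-view fusion idea from the abstract.
# NOT the authors' MLHG code: layer sizes, kernel choices, the 15-node hand
# graph, and the mean-aggregation stand-in for GraphSAGE are assumptions;
# adversarial training of the motion generator/discriminator is omitted.
import torch
import torch.nn as nn


class MultiSubCNN(nn.Module):
    """Stand-in for FES-MSCNN: parallel 1-D CNN branches over an sEMG window."""
    def __init__(self, in_channels=8, feat_dim=64, kernels=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_channels, 32, kernel_size=k, padding=k // 2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            for k in kernels
        ])
        self.proj = nn.Linear(32 * len(kernels), feat_dim)

    def forward(self, x):                          # x: (batch, channels, time)
        feats = [branch(x).squeeze(-1) for branch in self.branches]
        return self.proj(torch.cat(feats, dim=1))  # (batch, feat_dim)


class MeanSAGELayer(nn.Module):
    """Simplified GraphSAGE-style layer: each node concatenates its own feature
    with the mean of its neighbours' features before a shared linear map."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x, adj):                     # x: (batch, nodes, in_dim); adj: (nodes, nodes)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = (adj @ x) / deg                    # mean aggregation over neighbours
        return torch.relu(self.lin(torch.cat([x, neigh], dim=-1)))


class MLHGSketch(nn.Module):
    """View 1: sEMG -> generated motion -> graph features.
       View 2: sEMG -> CNN features.  Both views fused for classification."""
    def __init__(self, emg_channels=8, motion_nodes=15, motion_dim=3,
                 feat_dim=64, n_gestures=10):
        super().__init__()
        self.motion_nodes, self.motion_dim = motion_nodes, motion_dim
        self.emg_encoder = MultiSubCNN(emg_channels, feat_dim)               # view 2
        self.emg_to_motion = nn.Linear(feat_dim, motion_nodes * motion_dim)  # view 1 generator head
        self.motion_encoder = MeanSAGELayer(motion_dim, feat_dim)            # FEM-SAGE stand-in
        self.classifier = nn.Linear(2 * feat_dim, n_gestures)

    def forward(self, emg, adj):
        emg_feat = self.emg_encoder(emg)
        motion = self.emg_to_motion(emg_feat).view(-1, self.motion_nodes, self.motion_dim)
        motion_feat = self.motion_encoder(motion, adj).mean(dim=1)           # pool over joints
        return self.classifier(torch.cat([emg_feat, motion_feat], dim=-1))


# Toy forward pass: 4 windows of 8-channel sEMG, 200 samples each, 15-joint hand graph.
emg = torch.randn(4, 8, 200)
adj = torch.eye(15)                                # placeholder skeleton adjacency
print(MLHGSketch()(emg, adj).shape)                # torch.Size([4, 10])
```

A faithful implementation would presumably train the motion-generation branch against the recorded motion labels, with the FEM-SAGE discriminator supplying an adversarial signal, before or alongside training the fused gesture classifier on the gesture labels.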
ISSN: 1070-9908, 1558-2361
DOI: 10.1109/LSP.2023.3264417