Real-time fine finger motion decoding for transradial amputees with surface electromyography
| Published in | Neural Networks, Vol. 190, p. 107605 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | United States: Elsevier Ltd, 01.10.2025 |
Summary:

Advancements in human-machine interfaces (HMIs) are pivotal for enhancing rehabilitation technologies and improving the quality of life for individuals with limb loss. This paper presents a novel CNN-Transformer model for decoding continuous fine finger motions from surface electromyography (sEMG) signals by integrating a convolutional neural network (CNN) with the Transformer architecture, focusing on applications for transradial amputees. The model leverages the strengths of both architectures to effectively capture local muscle activation patterns and global temporal dependencies within sEMG signals.
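No code accompanies this record, but a minimal sketch can illustrate the kind of hybrid described here: a 1-D convolutional front-end over windowed multi-channel sEMG to capture local muscle activation patterns, a Transformer encoder for global temporal dependencies, and a regression head producing continuous finger trajectories. All layer sizes, channel counts, window lengths, and class names below are illustrative assumptions, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class CNNTransformerDecoder(nn.Module):
    """Hypothetical CNN-Transformer regressor for windowed multi-channel sEMG."""

    def __init__(self, n_channels=64, n_outputs=5, d_model=128,
                 n_heads=4, n_layers=2):
        super().__init__()
        # 1-D convolutions over time capture local muscle-activation patterns
        # within each window; electrode-array channels are the input channels.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=5, padding=2),
            nn.BatchNorm1d(d_model),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=5, padding=2, stride=2),
            nn.BatchNorm1d(d_model),
            nn.ReLU(),
        )
        # Transformer encoder models global temporal dependencies across the
        # downsampled feature sequence produced by the CNN front-end.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Regression head maps the pooled representation to continuous
        # finger trajectories (one value per finger in this sketch).
        self.head = nn.Linear(d_model, n_outputs)

    def forward(self, x):
        # x: (batch, n_channels, window_length) raw or filtered sEMG
        feats = self.cnn(x)                  # (batch, d_model, T')
        feats = feats.transpose(1, 2)        # (batch, T', d_model)
        feats = self.transformer(feats)      # (batch, T', d_model)
        return self.head(feats.mean(dim=1))  # (batch, n_outputs)


# Example: a batch of 200-ms windows from 64 electrodes sampled at 1 kHz.
if __name__ == "__main__":
    model = CNNTransformerDecoder()
    emg = torch.randn(8, 64, 200)
    print(model(emg).shape)  # torch.Size([8, 5])
```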
To achieve high-fidelity sEMG acquisition, we designed a flexible and stretchable epidermal array electrode sleeve (EAES) that conforms to the residual limb, ensuring comfortable long-term wear and robust signal capture, both critical for amputees. Moreover, we presented a computer vision (CV)-based multimodal data acquisition protocol that synchronizes sEMG recordings with video captures of finger movements, enabling the creation of a large, labeled dataset for training and evaluating the proposed model.
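The record gives no details of the synchronization step. The sketch below shows one plausible way to tie video-derived finger-angle labels (available only at the camera frame rate) to fast sliding sEMG windows by interpolating the labels at each window centre; the timestamp representation, window length, and hop size are assumptions.

```python
import numpy as np

def align_labels_to_windows(emg_t, video_t, video_angles,
                            win_len_s=0.2, hop_s=0.05):
    """Assign video-derived finger angles to sliding sEMG windows.

    emg_t        : (N,) sEMG sample timestamps in seconds
    video_t      : (M,) video frame timestamps in seconds
    video_angles : (M, n_fingers) finger angles estimated from video
    Returns per-window start indices into emg_t and interpolated angle labels.
    """
    starts, labels = [], []
    t, t_end = emg_t[0], emg_t[-1]
    while t + win_len_s <= t_end:
        centre = t + win_len_s / 2.0
        # Linear interpolation of each finger angle at the window centre
        # links the slow video label stream to the fast sEMG stream.
        label = np.array([np.interp(centre, video_t, video_angles[:, k])
                          for k in range(video_angles.shape[1])])
        starts.append(np.searchsorted(emg_t, t))
        labels.append(label)
        t += hop_s
    return np.asarray(starts), np.stack(labels)
```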
Given the challenges in acquiring reliable labeled data for transradial amputees, we adopted transfer learning and few-shot calibration to achieve fine finger motion decoding by leveraging datasets from non-amputated subjects. Extensive experimental results demonstrate the superior performance of the proposed model in various scenarios, including intra-session, inter-session, and inter-subject evaluations. Importantly, the system also exhibited promising zero-shot and few-shot learning capabilities for amputees, allowing for personalized calibration with minimal training data. The combined approach holds significant potential for advancing real-time, intuitive control of prostheses and other assistive technologies.
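As a rough illustration of the transfer-learning and few-shot calibration idea, the following sketch starts from a decoder pretrained on non-amputated subjects (such as the model sketched above), freezes its convolutional front-end, and fine-tunes the remaining layers on a small calibration set recorded from the amputee. The freezing strategy, optimizer, and hyperparameters are assumptions, not the authors' procedure.

```python
import torch
import torch.nn as nn

def calibrate_few_shot(model, calib_loader, epochs=20, lr=1e-3):
    """Adapt a decoder pretrained on non-amputated subjects to a new amputee
    using only a few labeled calibration windows (illustrative procedure)."""
    # Freeze the convolutional front-end so the low-level sEMG representation
    # learned from the source subjects is preserved.
    for p in model.cnn.parameters():
        p.requires_grad = False
    # Fine-tune only the Transformer encoder and regression head.
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    model.train()
    for _ in range(epochs):
        for emg, angles in calib_loader:
            opt.zero_grad()
            loss = loss_fn(model(emg), angles)
            loss.backward()
            opt.step()
    return model

# Usage (hypothetical): load weights trained on the non-amputee dataset,
# then run a short calibration session recorded from the amputee.
#   model.load_state_dict(torch.load("pretrained_nonamputee.pt"))
#   model = calibrate_few_shot(model, calib_loader)
```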
| Bibliography | ObjectType-Article-1; SourceType-Scholarly Journals-1; ObjectType-Feature-2 |
|---|---|
| ISSN | 0893-6080; 1879-2782 |
| DOI | 10.1016/j.neunet.2025.107605 |