Enhancing and Shaping Closed-Loop Co-Adaptive Myoelectric Interfaces With Scenario-Guided Adaptive Incremental Learning

Bibliographic Details
Published in IEEE Journal of Biomedical and Health Informatics, Vol. PP, pp. 1-12
Main Authors Li, Wei, Shao, Jiang, Shi, Ping, Li, Sujiao, Yu, Hongliu
Format Journal Article
Language English
Published United States: IEEE, 07.05.2025
Subjects
Abstract Virtual environments have been employed in the myoelectric prosthetics field as effective training and assessment tools to enhance intrinsic motivation, thereby encouraging sustained engagement in neuromuscular rehabilitation. However, motivating amputees to maintain consistent participation and perseverance in long-term training remains a critical challenge. To address this, we propose a scenario-guided adaptive incremental learning strategy that leverages contextual information in unknown environments to improve pseudo-label prediction accuracy. This strategy integrates two core components: an Augmented Reality (AR) environment and a Multimodal Progressive Domain Adversarial Neural Network (MPDANN). AR enables amputees to perform virtual prosthesis control and holographic object manipulation tasks in realistic, interactive scenarios, bridging the gap between laboratory training and daily-life usability. MPDANN employs dual-domain classifiers through domain adversarial training, utilizing surface electromyography (sEMG) and inertial measurement unit (IMU) data to facilitate knowledge transfer across multi-source domains and achieve robust adaptation to unseen environments. A total of 16 able-bodied subjects and 2 amputee subjects completed a 5-day assessment protocol involving 10 holographic object manipulation tasks under 8 limb-position conditions, using either a convolutional neural network (CNN) or MPDANN. Experimental results showed that able-bodied subjects using MPDANN achieved a 10% higher completion rate compared to the CNN baseline, reaching over 80% proficiency. While amputee subjects exhibited lower average completion rates than able-bodied subjects on the final day, the MPDANN strategy still demonstrated consistent performance improvements across both groups. This study substantiates the efficacy of integrating real-time visual feedback with a closed-loop domain adaptation algorithm, thereby enhancing sEMG recognition performance in untrained environments.
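Note: the abstract describes MPDANN only at a high level (domain adversarial training with dual domain classifiers over sEMG and IMU data, plus pseudo-labels for unseen environments). The sketch below is a minimal, hypothetical PyTorch illustration of that general idea, a gradient-reversal-based domain adversarial network with a confidence-thresholded pseudo-label loss. It is not the authors' MPDANN; all module names, feature dimensions, the number of gesture classes and domains, and the confidence threshold are assumptions for illustration only.

# Hypothetical sketch of domain-adversarial training with pseudo-labels,
# not the paper's MPDANN implementation. Dimensions and names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the feature extractor.
        return -ctx.lam * grad_output, None

class DANN(nn.Module):
    def __init__(self, in_dim=80, n_gestures=10, n_domains=8):
        super().__init__()
        # in_dim: concatenated sEMG + IMU feature vector per analysis window (assumed).
        self.features = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                      nn.Linear(128, 64), nn.ReLU())
        self.gesture_head = nn.Linear(64, n_gestures)   # task (gesture) classifier
        self.domain_head = nn.Linear(64, n_domains)     # e.g. limb-position domains

    def forward(self, x, lam=1.0):
        z = self.features(x)
        return self.gesture_head(z), self.domain_head(GradReverse.apply(z, lam))

def train_step(model, opt, src_x, src_y, src_d, tgt_x, tgt_d, lam=0.3, conf_thr=0.9):
    """One update: supervised loss on labeled source windows, adversarial domain
    loss on source and target, and a pseudo-label loss on confident target
    predictions (an assumed scheme, standing in for the paper's strategy)."""
    opt.zero_grad()
    src_logits, src_dom = model(src_x, lam)
    tgt_logits, tgt_dom = model(tgt_x, lam)

    loss = F.cross_entropy(src_logits, src_y)                        # labeled source
    loss = loss + F.cross_entropy(src_dom, src_d) + F.cross_entropy(tgt_dom, tgt_d)

    with torch.no_grad():                                            # pseudo-labels
        conf, pseudo = F.softmax(tgt_logits, dim=1).max(dim=1)
        mask = conf > conf_thr
    if mask.any():
        loss = loss + F.cross_entropy(tgt_logits[mask], pseudo[mask])

    loss.backward()
    opt.step()
    return loss.item()

A full system along the lines described in the abstract would additionally need the multi-source domain setup, a progressive training schedule, and the closed-loop AR feedback; none of that is represented in this sketch.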
Author Li, Wei
Shao, Jiang
Shi, Ping
Yu, Hongliu
Li, Sujiao
Author_xml – sequence: 1
  givenname: Wei
  orcidid: 0000-0002-4808-4659
  surname: Li
  fullname: Li, Wei
  email: fbrcn1017@163.com
  organization: Institute of Rehabilitation Engineering and Technology, University of Shanghai for Science and Technology, Shanghai, China
– sequence: 2
  givenname: Jiang
  surname: Shao
  fullname: Shao, Jiang
  email: shaojiang41@163.com
  organization: Institute of Rehabilitation Engineering and Technology, University of Shanghai for Science and Technology, Shanghai, China
– sequence: 3
  givenname: Ping
  orcidid: 0000-0001-7955-5567
  surname: Shi
  fullname: Shi, Ping
  email: rehabishi@163.com
  organization: Institute of Rehabilitation Engineering and Technology, University of Shanghai for Science and Technology, Shanghai, China
– sequence: 4
  givenname: Sujiao
  surname: Li
  fullname: Li, Sujiao
  email: Sujiaoli2015@foxmail.com
  organization: Institute of Rehabilitation Engineering and Technology, University of Shanghai for Science and Technology, Shanghai, China
– sequence: 5
  givenname: Hongliu
  orcidid: 0000-0001-6886-5498
  surname: Yu
  fullname: Yu, Hongliu
  email: yhl98@hotmail.com
  organization: Institute of Rehabilitation Engineering and Technology, University of Shanghai for Science and Technology, Shanghai, China
CODEN IJBHA9
ContentType Journal Article
DOI 10.1109/JBHI.2025.3567713
Discipline Medicine
EISSN 2168-2208
EndPage 12
ExternalDocumentID 40333104
10_1109_JBHI_2025_3567713
10989737
Genre orig-research
Journal Article
ISSN 2168-2194
2168-2208
IsPeerReviewed true
IsScholarly true
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
ORCID 0000-0001-7955-5567
0000-0002-4808-4659
0000-0001-6886-5498
PMID 40333104
PageCount 12
PublicationDate 2025-May-07
PublicationPlace United States
PublicationTitle IEEE journal of biomedical and health informatics
PublicationTitleAbbrev JBHI
PublicationTitleAlternate IEEE J Biomed Health Inform
PublicationYear 2025
Publisher IEEE
StartPage 1
SubjectTerms augmented reality (AR)
domain adversarial training
incremental learning
Myoelectric interface
surface electromyography (sEMG)
Title Enhancing and Shaping Closed-Loop Co-Adaptive Myoelectric Interfaces With Scenario-Guided Adaptive Incremental Learning
URI https://ieeexplore.ieee.org/document/10989737
https://www.ncbi.nlm.nih.gov/pubmed/40333104
https://www.proquest.com/docview/3201396530
Volume PP