A Weight-Aware-Based Multisource Unsupervised Domain Adaptation Method for Human Motion Intention Recognition

Bibliographic Details
Published in IEEE Transactions on Cybernetics, Vol. 55, No. 7, pp. 3131-3143
Main Authors Liu, Xiao-Yin, Li, Guotao, Zhou, Xiao-Hu, Liang, Xu, Hou, Zeng-Guang
Format Journal Article
Language English
Published United States: IEEE, 01.07.2025
Abstract Accurate recognition of human motion intention (HMI) helps exoskeleton robots improve wearing comfort and achieve natural human-robot interaction. A classifier trained on labeled source subjects (domains) performs poorly on an unlabeled target subject because of differences in individual motor characteristics. Unsupervised domain adaptation (UDA) has become an effective way to address this problem. However, the labeled data are collected from multiple source subjects that may differ not only from the target subject but also from each other. Current UDA methods for HMI recognition ignore the differences between source subjects, which reduces classification accuracy. Therefore, this article considers the differences between source subjects and develops a novel theory and algorithm for UDA to recognize HMI, in which the margin disparity discrepancy (MDD) is extended to multisource UDA theory and a novel weight-aware-based multisource UDA algorithm (WMDD) is proposed. A source domain weight, adjusted adaptively according to the MDD between each source subject and the target subject, is incorporated into UDA to measure the differences between source subjects. The developed multisource UDA theory is theoretically rigorous, and a bound on the generalization error on the target subject is guaranteed. The theory can be transformed into an optimization problem for UDA, successfully bridging the gap between theory and algorithm. Moreover, a lightweight network is employed to guarantee real-time classification, and adversarial learning between the feature generator and ensemble classifiers is utilized to further improve generalization ability. Extensive experiments verify the theoretical analysis and show that WMDD outperforms previous UDA methods on HMI recognition tasks.
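The adaptive weighting described in the abstract can be made concrete with a minimal sketch. In MDD-style multisource theory, a bound of the general form err_T(h) <= sum_k w_k (err_{S_k}(h) + d_MDD(S_k, T)) + lambda suggests down-weighting source subjects whose discrepancy to the target is large. The Python sketch below assumes the weights are a softmax over negative per-source MDD estimates; the names (source_weights, mdd_estimates, temperature) and the softmax rule itself are illustrative assumptions, since this record does not give the paper's exact update.

    import numpy as np

    def source_weights(mdd_estimates, temperature=1.0):
        # Hypothetical rule: source subjects with a smaller estimated margin
        # disparity discrepancy (MDD) to the target receive larger weights.
        scores = -np.asarray(mdd_estimates, dtype=float) / temperature
        scores -= scores.max()          # subtract max for numerical stability
        w = np.exp(scores)
        return w / w.sum()              # weights sum to 1

    # Example: three source subjects; the closest one (MDD = 0.3) dominates.
    print(source_weights([0.8, 0.3, 1.5]))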
Author Zhou, Xiao-Hu
Liang, Xu
Hou, Zeng-Guang
Li, Guotao
Liu, Xiao-Yin
Author_xml – sequence: 1
  givenname: Xiao-Yin
  orcidid: 0000-0001-7407-2216
  surname: Liu
  fullname: Liu, Xiao-Yin
  email: liuxiaoyin2023@ia.ac.cn
  organization: State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
– sequence: 2
  givenname: Guotao
  orcidid: 0000-0001-9201-2700
  surname: Li
  fullname: Li, Guotao
  email: guotao.li@ia.ac.cn
  organization: State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
– sequence: 3
  givenname: Xiao-Hu
  orcidid: 0000-0002-7602-4848
  surname: Zhou
  fullname: Zhou, Xiao-Hu
  email: xiaohu.zhou@ia.ac.cn
  organization: State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
– sequence: 4
  givenname: Xu
  orcidid: 0000-0002-9963-3662
  surname: Liang
  fullname: Liang, Xu
  email: liangxu2013@ia.ac.cn
  organization: State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
– sequence: 5
  givenname: Zeng-Guang
  orcidid: 0000-0002-1534-5840
  surname: Hou
  fullname: Hou, Zeng-Guang
  email: zengguang.hou@ia.ac.cn
  organization: School of Automation and Intelligence, Beijing Jiaotong University, Beijing, China
CODEN ITCEB8
ContentType Journal Article
DOI 10.1109/TCYB.2025.3565754
Discipline Sciences (General)
EISSN 2168-2275
EndPage 3143
ExternalDocumentID 40392643
10_1109_TCYB_2025_3565754
11007661
Genre orig-research
Journal Article
GrantInformation_xml – fundername: Beijing Natural Science Foundation
  grantid: L222053; L232021; L242101
  funderid: 10.13039/501100005089
– fundername: National Natural Science Foundation of China
  grantid: 62473365; U22A2056; 62373013
  funderid: 10.13039/501100001809
ISSN 2168-2267
2168-2275
IsPeerReviewed true
IsScholarly true
Issue 7
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
ORCID 0000-0002-7602-4848
0000-0001-9201-2700
0000-0001-7407-2216
0000-0002-9963-3662
0000-0002-1534-5840
PMID 40392643
PQID 3206235055
PQPubID 23479
PageCount 13
PublicationDate 2025-07-01
PublicationPlace United States
PublicationTitle IEEE transactions on cybernetics
PublicationTitleAbbrev TCYB
PublicationTitleAlternate IEEE Trans Cybern
PublicationYear 2025
Publisher IEEE
StartPage 3131
SubjectTerms Accuracy
Algorithms
Brain modeling
Classification algorithms
Exoskeletons
Feature extraction
Generalization bound
Generators
human motion intention recognition
Humans
Intention
Movement - physiology
multisource unsupervised domain adaptation
Optimization
Pattern Recognition, Automated - methods
Robot sensing systems
Target recognition
Training
Unsupervised Machine Learning
Title A Weight-Aware-Based Multisource Unsupervised Domain Adaptation Method for Human Motion Intention Recognition
URI https://ieeexplore.ieee.org/document/11007661
https://www.ncbi.nlm.nih.gov/pubmed/40392643
https://www.proquest.com/docview/3206235055
Volume 55