A rotary transformer cross-subject model for continuous estimation of finger joints kinematics and a transfer learning approach for new subjects
Published in | Frontiers in neuroscience Vol. 18; p. 1306050 |
---|---|
Main Authors | Lin, Chuang; He, Zheng |
Format | Journal Article |
Language | English |
Published | Switzerland: Frontiers Media S.A., 20.03.2024 |
Subjects | sEMG; transfer learning; finger joint angles estimation; cross-subject model; rotary transformer (RoFormer) |
Online Access | Get full text |
Abstract | Surface electromyographic (sEMG) signals are widely used for the continuous estimation of finger kinematics in human-machine interfaces (HMIs), and deep learning approaches are crucial in constructing such models. At present, most models are trained on specific subjects and lack cross-subject generalizability. Given the erratic nature of sEMG signals, a model trained on one subject cannot be applied directly to other subjects. Therefore, in this study, we proposed a cross-subject model based on the Rotary Transformer (RoFormer) that extracts features from multiple subjects for continuous estimation of kinematics and extends to new subjects through an adversarial transfer learning (ATL) approach.
We used each new subject's training data together with the ATL approach to calibrate the cross-subject model. To improve on the classic Transformer network, we compared the impact of different position embeddings on model performance, including learnable absolute position embedding, sinusoidal absolute position embedding, and Rotary Position Embedding (RoPE), and ultimately selected RoPE. We conducted experiments on 10 randomly selected subjects from the NinaproDB2 dataset, using the Pearson correlation coefficient (CC), normalized root mean square error (NRMSE), and coefficient of determination (R2) as performance metrics.
The proposed model was compared with four other models: LSTM, TCN, Transformer, and CNN-Attention. The results demonstrated that, in both cross-subject and subject-specific cases, RoFormer performed significantly better than the other four models. Additionally, the ATL approach improved the generalization of the cross-subject model more than the fine-tuning (FT) transfer learning approach.
These findings indicate that the proposed RoFormer-based method with an ATL approach has potential for practical applications in robotic hand control and other HMI settings. The model's superior performance suggests its suitability for continuous estimation of finger kinematics across different subjects, addressing the limitations of subject-specific models. |
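The abstract above selects Rotary Position Embedding (RoPE) over learnable and sinusoidal absolute embeddings. RoPE rotates each even/odd pair of query/key features by an angle proportional to the token's position, so the attention dot product between a query at step m and a key at step n depends only on the offset n − m. A minimal NumPy sketch of the rotation; the function name, shapes, and the base of 10000 are illustrative assumptions following the usual RoPE convention, not code from this article:

```python
import numpy as np

def rope(x, base=10000.0):
    """Rotate feature pairs of x (shape: seq_len x d_model) by
    position-dependent angles, as in Rotary Position Embedding."""
    seq_len, d = x.shape
    assert d % 2 == 0, "feature dimension must be even"
    # One frequency per feature pair: theta_i = base^(-2i/d)
    theta = base ** (-2.0 * np.arange(d // 2) / d)         # (d/2,)
    ang = np.arange(seq_len)[:, None] * theta[None, :]     # (seq_len, d/2)
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[:, 0::2], x[:, 1::2]                        # paired features
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin                     # 2-D rotation of
    out[:, 1::2] = x1 * sin + x2 * cos                     # each pair
    return out
```

Position 0 maps to a zero angle, so the first time step is left unrotated; the relative-offset property is what makes RoPE attractive for sliding sEMG windows, where absolute window position carries no meaning.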
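The record lists three estimation metrics: Pearson correlation coefficient (CC), normalized root mean square error (NRMSE), and coefficient of determination (R2). A minimal NumPy sketch of how they are computed from true and predicted joint-angle sequences; normalizing the RMSE by the range of the true signal is an assumption, since the record does not state which normalizer the authors used:

```python
import numpy as np

def cc(y_true, y_pred):
    """Pearson correlation coefficient between true and predicted angles."""
    return np.corrcoef(y_true, y_pred)[0, 1]

def nrmse(y_true, y_pred):
    """RMSE normalized by the range of the true signal (assumed normalizer)."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - residual SS / total SS."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

For a perfect prediction, CC and R2 are 1 and NRMSE is 0; a constant offset in the prediction leaves CC at 1 but lowers R2, which is one reason studies like this report all three.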
Author | Lin, Chuang; He, Zheng |
AuthorAffiliation | School of Information Science and Technology, Dalian Maritime University, Dalian, China |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/38572147 (View this record in MEDLINE/PubMed) |
CitedBy_id | doi:10.3390/s25051613; doi:10.3390/s24175631 |
ContentType | Journal Article |
Copyright | Copyright © 2024 Lin and He. |
DOI | 10.3389/fnins.2024.1306050 |
DatabaseName | CrossRef PubMed MEDLINE - Academic PubMed Central (Full Participant titles) DOAJ Directory of Open Access Journals |
Discipline | Anatomy & Physiology |
EISSN | 1662-453X |
ExternalDocumentID | PMC10987947; PMID 38572147 |
Genre | Journal Article |
ISSN | 1662-4548; 1662-453X |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Keywords | transfer learning; sEMG; finger joint angles estimation; cross-subject model; rotary transformer (RoFormer) |
License | Copyright © 2024 Lin and He. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
Notes | Edited by: Harikumar Rajaguru, Bannari Amman Institute of Technology (BIT), India. Reviewed by: B. Nataraj, Sri Ramakrishna Engineering College, India; Ajin R. Nair, Bannari Amman Institute of Technology (BIT), India; Bharanidharan N., Vellore Institute of Technology, India; Kalaiyarasi Mani, Bannari Amman Institute of Technology (BIT), India |
OpenAccessLink | http://journals.scholarsportal.info/openUrl.xqy?doi=10.3389/fnins.2024.1306050 |
PMID | 38572147 |
PublicationDate | 2024-03-20 |
PublicationPlace | Switzerland |
PublicationTitle | Frontiers in neuroscience |
PublicationTitleAlternate | Front Neurosci |
PublicationYear | 2024 |
Publisher | Frontiers Media S.A |
StartPage | 1306050 |
SubjectTerms | cross-subject model; finger joint angles estimation; Neuroscience; rotary transformer (RoFormer); sEMG; transfer learning |
URI | https://www.ncbi.nlm.nih.gov/pubmed/38572147; https://www.proquest.com/docview/3033007757; https://pubmed.ncbi.nlm.nih.gov/PMC10987947; https://doaj.org/article/b15458df01db43e192c858298afbf56c |
Volume | 18 |