Multi-Stream General and Graph-Based Deep Neural Networks for Skeleton-Based Sign Language Recognition
Published in | Electronics (Basel), Vol. 12, No. 13, p. 2841
---|---
Main Authors | Miah, Abu Saleh Musa; Hasan, Md. Al Mehedi; Jang, Si-Woong; Lee, Hyoun-Sup; Shin, Jungpil
Format | Journal Article
Language | English
Published | Basel: MDPI AG, 1 July 2023
Abstract | Sign language recognition (SLR) aims to bridge speech-impaired and general communities by recognizing signs from given videos. However, due to complex backgrounds, illumination variation, and subject structures in videos, researchers still face challenges in developing effective SLR systems. Many researchers have recently sought to develop skeleton-based sign language recognition systems to overcome the subject and background variation in hand gesture sign videos. However, skeleton-based SLR is still under-explored, mainly due to a lack of information and hand key point annotations. More recently, researchers have included body and face information along with hand gesture information for SLR; however, the obtained performance accuracy and generalizability remain unsatisfactory. In this paper, we propose a multi-stream graph-based deep neural network (SL-GDN) for a skeleton-based SLR system in order to overcome the above-mentioned problems. The main purpose of the proposed SL-GDN approach is to improve the generalizability and performance accuracy of the SLR system while maintaining a low computational cost, based on the human body pose in the form of 2D landmark locations. We first construct a skeleton graph based on 27 whole-body key points selected from among 67 key points to address the high computational cost problem. Then, we utilize the multi-stream SL-GDN to extract features from the whole-body skeleton graph, considering four streams. Finally, we concatenate the four different features and apply a classification module to refine the features and recognize the corresponding sign classes. Our data-driven graph construction method increases the system's flexibility and brings high generalizability, allowing it to adapt to varied data. We use two large-scale benchmark SLR data sets to evaluate the proposed model: the Turkish Sign Language data set (AUTSL) and the Chinese Sign Language data set (CSL). The reported performance accuracy results demonstrate the outstanding ability of the proposed model, and we believe it will be considered a great innovation in the SLR domain.
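The abstract's pipeline (a 27-node skeleton graph, per-stream graph convolutions, concatenation of four stream features, then classification) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the graph edges, layer widths, frame count, and the choice of joint/bone/motion streams (a common convention; the abstract does not name the four streams) are all assumptions.

```python
import numpy as np

N_JOINTS = 27      # key points selected from the 67 whole-body points
N_FRAMES = 16      # frames per clip (assumed)
N_CLASSES = 226    # e.g. AUTSL contains 226 sign classes
FEAT = 64          # per-stream feature width (assumed)

rng = np.random.default_rng(0)

def normalized_adjacency(edges, n):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    a = np.eye(n)
    for i, j in edges:
        a[i, j] = a[j, i] = 1.0
    d_inv = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    return d_inv @ a @ d_inv

def graph_conv(x, a_hat, w):
    """One graph-convolution layer: aggregate neighbors, project, ReLU.
    x: (frames, joints, in_feat) -> (frames, joints, out_feat)."""
    return np.maximum(a_hat @ x @ w, 0.0)

# A toy chain skeleton; the paper's 27-point graph is hand-designed.
edges = [(i, i + 1) for i in range(N_JOINTS - 1)]
a_hat = normalized_adjacency(edges, N_JOINTS)

def stream_features(x, w1, w2):
    """Two graph-conv layers, then average pooling over frames and joints."""
    h = graph_conv(graph_conv(x, a_hat, w1), a_hat, w2)
    return h.mean(axis=(0, 1))      # (FEAT,)

# Four input streams built from random 2D landmark positions.
joints = rng.normal(size=(N_FRAMES, N_JOINTS, 2))           # 2D landmarks
bones = joints - np.roll(joints, 1, axis=1)                 # joint-to-previous-joint vectors (toy parent relation)
joint_motion = np.diff(joints, axis=0, prepend=joints[:1])  # frame-to-frame deltas
bone_motion = np.diff(bones, axis=0, prepend=bones[:1])

feats = []
for s in (joints, bones, joint_motion, bone_motion):
    w1 = rng.normal(scale=0.1, size=(2, FEAT))
    w2 = rng.normal(scale=0.1, size=(FEAT, FEAT))
    feats.append(stream_features(s, w1, w2))

# Concatenate the four stream features and classify.
fused = np.concatenate(feats)                 # shape (4 * FEAT,)
w_cls = rng.normal(scale=0.1, size=(4 * FEAT, N_CLASSES))
logits = fused @ w_cls
predicted_class = int(np.argmax(logits))
```

With these assumed sizes, `fused` has shape `(256,)` before classification; the paper's classification module further refines the concatenated features, which the single linear layer here only gestures at.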
Audience | Academic |
Author | Miah, Abu Saleh Musa (ORCID 0000-0002-1238-0464); Hasan, Md. Al Mehedi; Jang, Si-Woong; Lee, Hyoun-Sup; Shin, Jungpil (ORCID 0000-0002-7476-2468)
Copyright | © 2023 by the authors. Licensee MDPI AG, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
DOI | 10.3390/electronics12132841 |
EISSN | 2079-9292 |
ISSN | 2079-9292 |
SubjectTerms | Accuracy; Annotations; Artificial intelligence; Artificial neural networks; Classification; Communication; Computational efficiency; Computing costs; Datasets; Deafness; Feature recognition; Neural networks; Researchers; Semantics; Sign language; Support vector machines; System effectiveness; Video
URI | https://www.proquest.com/docview/2836305616 |