Intelligent real-time Arabic sign language classification using attention-based inception and BiLSTM
Published in | Computers & electrical engineering Vol. 95; p. 107395
Main Authors | Abdul, Wadood; Alsulaiman, Mansour; Amin, Syed Umar; Faisal, Mohammed; Muhammad, Ghulam; Albogamy, Fahad R.; Bencherif, Mohamed A.; Ghaleb, Hamid
Format | Journal Article
Language | English
Published | Amsterdam: Elsevier Ltd, 01.10.2021
Abstract |
• A novel bio-inspired attention-based inception architecture is proposed that can adapt to different types of spatial context using convolution filters of different sizes. The characteristics of each dataset are unique, so the attention mechanism helps focus on those features to improve classification performance.
• The shallow inception model is designed with a two-layer attention mechanism: fewer layers, but a large number of convolution filters, which addresses the overfitting problem caused by small dataset sizes.
• An LSTM-based recurrent neural network (RNN) module is proposed to extract temporal features after the inception module is applied.
• The proposed model is lightweight, with fewer parameters and less processing time.
• The proposed model achieves good performance for both dynamic and static signs and gestures.

Bio-inspired deep learning models have revolutionized sign language classification, achieving extraordinary accuracy and human-like video understanding. Recognizing and classifying sign language videos in real time is challenging because the duration and speed of each sign vary across subjects, the background of most videos is dynamic, and the classification result must be produced in real time. This study proposes a model based on a convolutional neural network (CNN) Inception model with an attention mechanism for extracting spatial features and a Bi-LSTM (bidirectional long short-term memory) network for temporal feature extraction. The proposed model is tested on datasets with highly variable characteristics, such as different clothing, variable lighting, and variable distance from the camera. Real-time classification achieves significantly early detections while matching the performance of offline operation. The proposed model has fewer parameters, fewer deep learning layers, and requires significantly less processing time than state-of-the-art models.
The Inception model with an attention mechanism and two attention blocks [Display omitted]
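The pipeline the abstract describes (multi-scale spatial branches, attention-weighted fusion, then a bidirectional temporal pass) can be sketched in miniature with plain numpy. This is purely an illustrative stand-in, not the authors' implementation: average pooling at several window sizes stands in for the inception branches with different filter sizes, a softmax step stands in for the attention blocks, and a forward/backward cumulative summary stands in for the Bi-LSTM. All function names, shapes, and the random score matrix `W` are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_scale_features(frame, kernel_sizes=(1, 2, 4)):
    """Stand-in for parallel inception branches: average-pool the frame
    at several window sizes, then summarize each branch as one scalar."""
    feats = []
    for k in kernel_sizes:
        h, w = frame.shape
        crop = frame[:h - h % k, :w - w % k]
        pooled = crop.reshape(crop.shape[0] // k, k,
                              crop.shape[1] // k, k).mean(axis=(1, 3))
        feats.append(pooled.mean())
    return np.array(feats)

def attention_fuse(branch_feats, score_matrix):
    """Toy attention: score each branch, softmax-normalize the scores,
    and return the weighted sum of branch features plus the weights."""
    weights = softmax(score_matrix @ branch_feats)
    return float(weights @ branch_feats), weights

def bidirectional_summary(seq):
    """Stand-in for the Bi-LSTM: concatenate forward and backward
    cumulative means of the per-frame fused features."""
    fwd = np.cumsum(seq) / np.arange(1, len(seq) + 1)
    bwd = (np.cumsum(seq[::-1]) / np.arange(1, len(seq) + 1))[::-1]
    return np.stack([fwd, bwd], axis=1)

# Simulated 8-frame grayscale clip, 16x16 pixels per frame.
video = rng.random((8, 16, 16))
W = rng.random((3, 3))  # hypothetical attention score matrix

fused = []
for frame in video:
    f, w = attention_fuse(multi_scale_features(frame), W)
    fused.append(f)

temporal = bidirectional_summary(np.array(fused))
print(temporal.shape)  # (8, 2): one forward and one backward feature per frame
```

Because the backward pass summarizes the suffix of the clip at every frame, a per-frame classifier reading `temporal` can emit a decision before the sign ends, which mirrors the early-detection behavior the abstract claims for real-time operation.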
ArticleNumber | 107395 |
Authors |
1. Wadood Abdul, Computer Engineering Department, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
2. Mansour Alsulaiman (msuliman@ksu.edu.sa), Computer Engineering Department, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
3. Syed Umar Amin (samin@psu.edu.sa), Department of Computer Science, Prince Sultan University, Riyadh 11586, Saudi Arabia
4. Mohammed Faisal, College of Applied Computer Sciences, King Saud University, Saudi Arabia
5. Ghulam Muhammad, Computer Engineering Department, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
6. Fahad R. Albogamy, Turabah University College, Computer Sciences Program, Taif University, Taif 21944, Saudi Arabia
7. Mohamed A. Bencherif, Centre of Smart Robotics Research (CS2R), King Saud University, Riyadh 11543, Saudi Arabia
8. Hamid Ghaleb, Software Engineering Department, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
BookMark | eNqNkEFP3DAQha0KJBbKf3DFOVvbiRPnhGAFLdJWPRTOlnc8iWaVdRbbi8S_r2E5VD1xGr2ZeW803zk7CXNAxr5JsZRCtt-3S5h3e5wQMIxLJZQs_a7u9Re2kKbrK9FpfcIWQjS66nrRnrHzlLai6FaaBfMPIeM00Ygh84huqjLtkN9EtyHgicbAJxfGgxuRw-RSooHAZZoDPyQKI3c5F2vR1cYl9JwC4P597oLnt7T-8_jrKzsd3JTw8qNesKf7u8fVz2r9-8fD6mZdQSN1rqA3vWlbrx3UMPSoYKMGA015w3sN0tTKCdMao9qh8UJtZKeNMdp1Ste6g_qCXR1z93F-PmDKdjsfYignrdJ9U-gYU5et6-MWxDmliIMFyu8v5ehoslLYN7R2a_9Ba9_Q2iPaktD_l7CPtHPx9VPe1dGLBcQLYbQJCAs0TxEhWz_TJ1L-AizCnbk |
ContentType | Journal Article |
Copyright | 2021 Elsevier BV
DOI | 10.1016/j.compeleceng.2021.107395 |
Discipline | Engineering |
EISSN | 1879-0755 |
ExternalDocumentID | 10_1016_j_compeleceng_2021_107395 S0045790621003621 |
ISSN | 0045-7906 |
IsPeerReviewed | true |
IsScholarly | true |
Keywords | Sign language; Deep learning; Bio-inspired computing; Real-time classification; BiLSTM; Inception. Abbreviations: ArSL; ASL; CNN; DL; LSTM; RNN; SGD; CVPR; ECCV; ICCVW; ICIP; ICMEW; ICPR
StartPage | 107395 |
SubjectTerms | Artificial neural networks; BiLSTM; Bio-inspired computing; Biomimetics; Classification; Deep learning; Feature extraction; Inception; Machine learning; Real time; Real-time classification; Sign language; Video data
URI | https://dx.doi.org/10.1016/j.compeleceng.2021.107395 https://www.proquest.com/docview/2594202883 |
Volume | 95 |