Deep motion templates and extreme learning machine for sign language recognition
Published in | The Visual Computer, Vol. 36, No. 6, pp. 1233–1246 |
---|---|
Main Authors | Imran, Javed; Raman, Balasubramanian |
Format | Journal Article |
Language | English |
Published | Berlin/Heidelberg: Springer Berlin Heidelberg, 01.06.2020 |
Abstract | Sign language is a visual language used by persons with hearing and speech impairments to communicate through fingerspelling and body gestures. This paper proposes a framework for automatic sign language recognition that does not require hand segmentation. The proposed method first generates three types of motion templates: the motion history image, the dynamic image, and our proposed RGB motion image. These motion templates are used to fine-tune three ConvNets pretrained on the ImageNet dataset. Fine-tuning avoids learning all the parameters from scratch, leading to faster network convergence even with a small number of training samples. To combine the outputs of the three ConvNets, we propose a fusion technique based on the kernel-based extreme learning machine (KELM). The features extracted from the last fully connected layer of each trained ConvNet are used to train three KELMs, and the final class label is predicted by averaging their scores. The proposed approach is validated on a number of publicly available sign language and human action recognition datasets, and state-of-the-art results are achieved. Finally, an Indian sign language dataset is also collected using a thermal camera. The experimental results show that our ConvNet-based deep features, together with the proposed KELM-based fusion, are robust for any type of human motion recognition. |
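For concreteness, here is a minimal sketch of the two standard motion templates named in the abstract: the motion history image of Bobick and Davis and the dynamic image obtained by approximate rank pooling (Bilen et al.). The paper's third template, the RGB motion image, is its own proposal and is not reconstructed here; the duration `tau` and difference `threshold` values below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def motion_history_image(frames, tau=20, threshold=30):
    """Motion history image (Bobick and Davis, 2001) over grayscale frames.

    Recursive update:
        H(x, y, t) = tau                       if |I_t - I_{t-1}| > threshold
                   = max(0, H(x, y, t-1) - 1)  otherwise
    so recent motion is bright and older motion fades linearly.
    """
    mhi = np.zeros(frames[0].shape, dtype=np.float32)
    for prev, cur in zip(frames[:-1], frames[1:]):
        moving = np.abs(cur.astype(np.int16) - prev.astype(np.int16)) > threshold
        mhi = np.where(moving, float(tau), np.maximum(mhi - 1.0, 0.0))
    # Scale to 8-bit so the template can be fed to an ImageNet-pretrained ConvNet.
    return (255.0 * mhi / tau).astype(np.uint8)

def dynamic_image(frames):
    """Dynamic image via approximate rank pooling (Bilen et al., 2016).

    Collapses a clip into a single frame as a weighted sum of its T frames,
    using the closed-form weights alpha_t = 2t - T - 1, t = 1..T.
    """
    T = len(frames)
    alphas = (2.0 * np.arange(1, T + 1) - T - 1).astype(np.float32)
    di = np.tensordot(alphas, np.stack(frames).astype(np.float32), axes=1)
    di -= di.min()  # min-max normalize into the 8-bit range
    return (255.0 * di / max(float(di.max()), 1e-6)).astype(np.uint8)
```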
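The fusion stage can likewise be sketched in closed form. A kernel-based ELM (Huang et al., 2012) needs no iterative training: with kernel matrix K(X, X) over the training features, the output weights are beta = (I/C + K(X, X))^(-1) T for one-hot targets T, and prediction is K(x, X) beta. The RBF kernel and the hyperparameters `C` and `gamma` below are illustrative assumptions; the averaging of the three score matrices follows the abstract.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1e-3):
    """RBF kernel matrix between row-feature matrices A (n x d) and B (m x d)."""
    sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

class KELM:
    """Kernel-based extreme learning machine (Huang et al., 2012)."""

    def __init__(self, C=100.0, gamma=1e-3):
        self.C, self.gamma = C, gamma

    def fit(self, X, y, n_classes):
        self.X = X
        targets = np.eye(n_classes)[y]  # one-hot encode integer class labels
        K = rbf_kernel(X, X, self.gamma)
        # Closed-form solve of (I/C + K) beta = T; no hidden-layer training.
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, targets)
        return self

    def scores(self, X_test):
        return rbf_kernel(X_test, self.X, self.gamma) @ self.beta

def fuse_predict(kelms, features_per_stream):
    """Late fusion: average the class-score matrices of the per-stream KELMs."""
    avg = np.mean([k.scores(f) for k, f in zip(kelms, features_per_stream)], axis=0)
    return avg.argmax(axis=1)
```

Here each element of `features_per_stream` would hold the last-fully-connected-layer features of one fine-tuned ConvNet (MHI, dynamic image, RGB motion image streams) for the same test clips.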
Author | Imran, Javed (jimran@cs.iitr.ac.in) and Raman, Balasubramanian, Department of Computer Science and Engineering, Indian Institute of Technology Roorkee |
ContentType | Journal Article |
Copyright | Springer-Verlag GmbH Germany, part of Springer Nature 2019 |
DOI | 10.1007/s00371-019-01725-3 |
Discipline | Engineering; Computer Science |
EISSN | 1432-2315 |
EndPage | 1246 |
GrantInformation | SMILE Project, IIT Roorkee |
ISSN | 0178-2789 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 6 |
Keywords | Sign language recognition; Dynamic image; Late fusion; Extreme learning machine; Motion history image; Convolutional neural network |
Language | English |
PageCount | 14 |
PublicationDate | 2020-06-01 |
PublicationPlace | Berlin/Heidelberg |
PublicationSubtitle | International Journal of Computer Graphics |
PublicationTitle | The Visual Computer |
PublicationTitleAbbrev | Vis Comput |
PublicationYear | 2020 |
Publisher | Springer Berlin Heidelberg; Springer Nature B.V. |
StartPage | 1233 |
SubjectTerms | Accuracy; Artificial Intelligence; Artificial neural networks; Classification; Computer Graphics; Computer Science; Datasets; Deep learning; Human activity recognition; Human motion; Image Processing and Computer Vision; Machine learning; Motion perception; Neural networks; Original Article; Recognition; Sign language |
Title | Deep motion templates and extreme learning machine for sign language recognition |
URI | https://link.springer.com/article/10.1007/s00371-019-01725-3 https://www.proquest.com/docview/2918073484 |
Volume | 36 |