Explore Efficient Local Features from RGB-D Data for One-Shot Learning Gesture Recognition
Published in | IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 38, no. 8, pp. 1626-1639 |
---|---|
Main Authors | Jun Wan, Guodong Guo, Stan Z. Li |
Format | Journal Article |
Language | English |
Published | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.08.2016 |
Subjects | Gesture recognition; One-shot learning; RGB-D data; Feature extraction; Bag of visual words model |
Online Access | Get full text |
Abstract | The availability of handy RGB-D sensors has brought about a surge of gesture recognition research and applications. Among the various approaches, one-shot learning is advantageous because it requires only a minimal amount of training data. Here, we provide a thorough review of one-shot learning gesture recognition from RGB-D data and propose a novel spatiotemporal feature extracted from RGB-D data, namely mixed features around sparse keypoints (MFSK). In the review, we analyze the challenges being faced and point out future research directions that may enlighten researchers in this field. The proposed MFSK feature is robust and invariant to scale, rotation, and partial occlusions. To alleviate the insufficiency of one-shot training samples, we augment the training set by artificially synthesizing versions at various temporal scales, which helps cope with gestures performed at varying speeds. We evaluate the proposed method on the ChaLearn gesture dataset (CGD). The results show that our approach outperforms all currently published approaches on the challenging subsets of CGD, such as the translated, scaled, and occluded subsets. When applied to RGB-D datasets that are not one-shot (e.g., the Cornell Activity Dataset-60 and the MSR Daily Activity 3D dataset), the proposed feature also produces very promising results under leave-one-out cross-validation or one-shot learning. |
---|---|
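The abstract's temporal-scale augmentation idea can be sketched as follows: from a single training gesture (a sequence of frames), synthesize slower and faster versions by resampling frame indices. The function names, the chosen scale set, and the nearest-frame resampling strategy are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch of temporal-scale augmentation for one-shot gesture samples.
# Scales and resampling strategy are assumptions for illustration only.
import numpy as np

def resample_temporal(frames, scale):
    """Return a version of `frames` stretched or compressed in time.

    frames: array of shape (T, ...) -- one gesture sample of T frames.
    scale:  >1 slows the gesture down (more frames), <1 speeds it up.
    Nearest-frame sampling keeps every output frame an original frame.
    """
    t = len(frames)
    new_t = max(1, int(round(t * scale)))
    idx = np.clip(np.round(np.linspace(0, t - 1, new_t)).astype(int), 0, t - 1)
    return frames[idx]

def augment_one_shot(sample, scales=(0.5, 0.75, 1.0, 1.5, 2.0)):
    """Expand a single training sample into several temporal-scale variants."""
    return [resample_temporal(sample, s) for s in scales]

# Toy usage: a "gesture" of 8 frames, each frame a 4x4 depth patch.
gesture = np.arange(8 * 16).reshape(8, 4, 4)
variants = augment_one_shot(gesture)
print([v.shape[0] for v in variants])  # frame counts at each scale
```

Each variant imitates the same gesture performed at a different speed, so a matcher trained on the augmented set is less sensitive to tempo, which is the stated motivation for this step.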
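The one-shot setting described in the abstract (exactly one labeled example per gesture class) reduces at recognition time to nearest-neighbor matching of descriptors. The sketch below uses plain feature histograms and cosine similarity as stand-ins for the paper's MFSK features and its actual matcher; both stand-ins are assumptions, not the published method.

```python
# Sketch of one-shot recognition via 1-nearest-neighbor matching.
# Histogram descriptors and cosine similarity are illustrative assumptions.
import numpy as np

def nearest_neighbor_label(train_feats, train_labels, test_feat):
    """Assign the label of the most cosine-similar training descriptor.

    train_feats:  one descriptor per class (the "one shot" each).
    train_labels: class labels aligned with train_feats.
    test_feat:    descriptor of the query gesture.
    """
    t = np.asarray(train_feats, dtype=float)
    q = np.asarray(test_feat, dtype=float)
    sims = (t @ q) / (np.linalg.norm(t, axis=1) * np.linalg.norm(q) + 1e-12)
    return train_labels[int(np.argmax(sims))]

# Toy usage: three classes, one 5-bin "feature histogram" each.
train = [[1, 0, 0, 2, 0], [0, 3, 1, 0, 0], [0, 0, 0, 1, 4]]
labels = ["wave", "point", "swipe"]
print(nearest_neighbor_label(train, labels, [0, 2.5, 1.2, 0, 0]))  # prints "point"
```

Leave-one-out cross-validation, mentioned for the non-one-shot datasets, simply repeats this matching with each sample held out in turn as the query.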
Author | Wan, Jun; Guo, Guodong; Li, Stan Z. |
Author_xml | – sequence: 1 surname: Jun Wan fullname: Jun Wan email: jun.wan@nlpr.ia.ac.cn organization: Nat. Lab. of Pattern Recognition, Inst. of Autom., Beijing, China – sequence: 2 surname: Guodong Guo fullname: Guodong Guo email: guodong.guo@mail.wvu.edu organization: Lane Dept. of Comput. Sci. & Electr. Eng., West Virginia Univ., Morgantown, WV, USA – sequence: 3 givenname: Stan Z. surname: Li fullname: Li, Stan Z. email: szli@nlpr.ia.ac.cn organization: Nat. Lab. of Pattern Recognition, Inst. of Autom., Beijing, China |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/26731641 (View this record in MEDLINE/PubMed) |
CODEN | ITPIDJ |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2016 |
DOI | 10.1109/TPAMI.2015.2513479 |
Discipline | Engineering Computer Science |
EISSN | 2160-9292 1939-3539 |
EndPage | 1639 |
Genre | orig-research Journal Article Review |
GrantInformation_xml | – fundername: National Science and Technology Support Program grantid: #2013BAK02B01 – fundername: Chinese Academy of Sciences grantid: KGZD-EW-102-2 funderid: 10.13039/501100002367 – fundername: AuthenMetric R&D Funds – fundername: Chinese National Natural Science Foundation grantid: #61203267; #61375037; #61473291; #61572501; #61502491 funderid: 10.13039/501100001809 |
ISSN | 0162-8828 1939-3539 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 8 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html |
PMID | 26731641 |
PageCount | 14 |
PublicationDate | 2016-08-01 |
PublicationPlace | United States |
PublicationTitle | IEEE transactions on pattern analysis and machine intelligence |
PublicationTitleAbbrev | TPAMI |
PublicationTitleAlternate | IEEE Trans Pattern Anal Mach Intell |
PublicationYear | 2016 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
SecondaryResourceType | review_article |
StartPage | 1626 |
SubjectTerms | Algorithms; Bag of visual words model; Datasets; Feature extraction; Gesture recognition; Gestures; Hidden Markov models; Humans; One-shot learning; Pattern Recognition, Automated; RGB-D data; Robustness; Spatiotemporal phenomena; Three-dimensional displays; Training |
Title | Explore Efficient Local Features from RGB-D Data for One-Shot Learning Gesture Recognition |
URI | https://ieeexplore.ieee.org/document/7368923 https://www.ncbi.nlm.nih.gov/pubmed/26731641 https://www.proquest.com/docview/1802274659 https://www.proquest.com/docview/1802742332 |
Volume | 38 |