EgoGesture: A New Dataset and Benchmark for Egocentric Hand Gesture Recognition
Published in | IEEE Transactions on Multimedia, Vol. 20, No. 5, pp. 1038–1050
Main Authors | Zhang, Yifan; Cao, Congqi; Cheng, Jian; Lu, Hanqing
Format | Journal Article |
Language | English |
Published | Piscataway: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.05.2018
Abstract | Gesture is a natural interface for human-computer interaction, especially when interacting with wearable devices such as VR/AR helmets and glasses. However, the gesture recognition community lacks suitable datasets for developing egocentric (first-person view) gesture recognition methods, in particular in the deep learning era. In this paper, we introduce a new benchmark dataset named EgoGesture with sufficient size, variation, and reality to be able to train deep neural networks. This dataset contains more than 24 000 gesture samples and 3 000 000 frames in both color and depth modalities from 50 distinct subjects. We design 83 different static and dynamic gestures focused on interaction with wearable devices and collect them in six diverse indoor and outdoor scenes with variation in background and illumination. We also consider the scenario in which people perform gestures while walking. The performance of several representative approaches is systematically evaluated on two tasks: gesture classification in segmented data, and gesture spotting and recognition in continuous data. Our empirical study also provides an in-depth analysis of input modality selection and domain adaptation between different scenes.
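The abstract's first benchmark task, gesture classification in segmented data, amounts to scoring per-clip predictions against ground-truth labels. The sketch below is a minimal illustration of that kind of scoring, assuming a hypothetical CSV of per-clip results with `label` and `prediction` columns; the file name and column names are assumptions made here for illustration, not part of the EgoGesture release.

```python
# Hypothetical sketch: scoring classification results on segmented gesture clips.
# The CSV layout (columns "label" and "prediction", one row per clip) is an
# assumed format for illustration, not the official EgoGesture annotation format.
import csv
from collections import Counter


def clip_scores(csv_path: str) -> tuple[float, float]:
    """Return (overall accuracy, mean per-class accuracy) over segmented clips."""
    total = correct = 0
    correct_per_class = Counter()  # correctly classified clips per gesture class
    clips_per_class = Counter()    # total clips per gesture class
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            label, pred = int(row["label"]), int(row["prediction"])
            clips_per_class[label] += 1
            total += 1
            if pred == label:
                correct_per_class[label] += 1
                correct += 1
    overall = correct / total if total else 0.0
    # Mean class accuracy weights all gesture classes equally,
    # regardless of how many clips each class contributes.
    mean_class = (
        sum(correct_per_class[c] / n for c, n in clips_per_class.items())
        / len(clips_per_class)
        if clips_per_class
        else 0.0
    )
    return overall, mean_class


if __name__ == "__main__":
    acc, mca = clip_scores("predictions.csv")  # hypothetical results file
    print(f"overall accuracy: {acc:.4f}  mean class accuracy: {mca:.4f}")
```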
Author | Cao, Congqi; Lu, Hanqing; Zhang, Yifan; Cheng, Jian
Author details |
– Yifan Zhang (ORCID 0000-0002-9190-3509), yfzhang@nlpr.ia.ac.cn
– Congqi Cao (ORCID 0000-0002-0217-9791), congqi.cao@nlpr.ia.ac.cn
– Jian Cheng (ORCID 0000-0003-1289-2758), jcheng@nlpr.ia.ac.cn
– Hanqing Lu, luhq@nlpr.ia.ac.cn
All authors are with the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences and University of Chinese Academy of Sciences, Beijing, China.
CODEN | ITMUF8 |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2018 |
DOI | 10.1109/TMM.2018.2808769 |
Discipline | Engineering Computer Science |
EISSN | 1941-0077 |
EndPage | 1050 |
ExternalDocumentID | 10_1109_TMM_2018_2808769 8299578 |
Genre | orig-research |
GrantInformation |
– Youth Innovation Promotion Association of the Chinese Academy of Sciences (funder ID 10.13039/501100004739)
– National Natural Science Foundation of China, grants 61332016 and 61572500 (funder ID 10.13039/501100001809)
ISSN | 1520-9210 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 5 |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html |
ORCID | 0000-0003-1289-2758 0000-0002-0217-9791 0000-0002-9190-3509 |
PQID | 2029142456 |
PQPubID | 75737 |
PageCount | 13 |
PublicationDate | 2018-05-01 |
PublicationPlace | Piscataway |
PublicationTitle | IEEE transactions on multimedia |
PublicationTitleAbbrev | TMM |
PublicationYear | 2018 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 1038 |
SubjectTerms | Artificial neural networks; Augmented reality; Benchmark; Benchmark testing; Benchmarks; Cameras; dataset; Datasets; egocentric vision; Empirical analysis; first-person view; Gesture recognition; Machine learning; Neural networks; Performance evaluation; Task analysis; Three-dimensional displays; Wearable technology
Title | EgoGesture: A New Dataset and Benchmark for Egocentric Hand Gesture Recognition |
URI | https://ieeexplore.ieee.org/document/8299578 https://www.proquest.com/docview/2029142456 |
Volume | 20 |