Evaluating spatiotemporal interest point features for depth-based action recognition
Published in | Image and Vision Computing, Vol. 32, No. 8, pp. 453–464
Format | Journal Article
Language | English
Publisher | Elsevier B.V.
Publication date | 01.08.2014
Abstract | Human action recognition has many real-world applications, such as natural user interfaces, virtual reality, intelligent surveillance, and gaming. However, it remains a very challenging problem. In action recognition from visible-light videos, spatiotemporal interest point (STIP) based features are widely used with good performance. Recently, with the advance of depth imaging technology, a new modality has emerged for human action recognition. It is important to assess the performance and usefulness of STIP features for action analysis on this new modality of 3D depth maps. In this paper, we evaluate STIP-based features for depth-based action recognition. Different interest point detectors and descriptors are combined to form various STIP features. The bag-of-words representation and SVM classifiers are used for action learning. Our comprehensive evaluation is conducted on four challenging 3D depth databases. Further, we use two schemes to refine the STIP features: one detects interest points in RGB videos and applies them to the aligned depth sequences, and the other uses the human skeleton to remove irrelevant interest points. These refinements help us gain a deeper understanding of STIP features on 3D depth data. Finally, we investigate a fusion of the best STIP features with the prevalent skeleton features, presenting a complementary use of STIP features for action recognition on 3D data. The fusion approach gives significantly higher accuracies than many state-of-the-art results.
Highlights:
• A comprehensive evaluation of STIP-based features for depth-based action recognition.
• Two schemes to refine STIP features for a deeper understanding of their behaviors.
• A fusion approach is developed which outperforms many state-of-the-art methods.
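The pipeline named in the abstract (local STIP descriptors → bag-of-words histograms → SVM classifier) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy descriptors, codebook size, and RBF kernel are all assumptions for demonstration only.

```python
# Sketch of a bag-of-words + SVM action-recognition pipeline.
# Toy data stands in for the paper's STIP descriptors; all parameter
# choices here (codebook size 8, RBF kernel) are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Each "video" yields a variable-length set of local descriptors (rows).
# Two action classes, made separable by shifting the descriptor mean.
videos = [rng.normal(loc=c, size=(30, 16)) for c in (0.0, 0.0, 2.0, 2.0)]
labels = np.array([0, 0, 1, 1])

# 1) Learn a visual vocabulary by clustering all descriptors together.
codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(np.vstack(videos))

# 2) Represent each video as a normalized histogram of codeword counts.
def bow_histogram(descriptors, codebook):
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

X = np.array([bow_histogram(v, codebook) for v in videos])

# 3) Train an SVM on the histogram representations.
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
print(clf.predict(X))
```

In practice the descriptors would come from a STIP detector/descriptor pair run on the depth (or aligned RGB) sequences, and the codebook size would be tuned per dataset.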
Authors | Yu Zhu, Wenbin Chen, Guodong Guo (guodong.guo@mail.wvu.edu)
Copyright | 2014 Elsevier B.V.
DOI | 10.1016/j.imavis.2014.04.005 |
Discipline | Applied Sciences; Engineering
EISSN | 1872-8138 |
ISSN | 0262-8856 |
Peer reviewed | Yes
Scholarly | Yes
Keywords | Action recognition; Spatiotemporal interest point (STIP); STIP features; STIP feature refinement; Detectors; Descriptors; Feature fusion; RGB-D sensor; Evaluation
Subjects | Action recognition; Descriptors; Detectors; Evaluation; Feature fusion; Feature recognition; Human; Imaging; Recognition; RGB-D sensor; Spatiotemporal interest point (STIP); State of the art; STIP feature refinement; STIP features; Support vector machines; Three dimensional
Online access | https://dx.doi.org/10.1016/j.imavis.2014.04.005
Online access | https://www.proquest.com/docview/1559651624