Human Action Recognition From Various Data Modalities: A Review
Published in | IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 45, No. 3, pp. 3200-3225
Main Authors | Sun, Zehua; Ke, Qiuhong; Rahmani, Hossein; Bennamoun, Mohammed; Wang, Gang; Liu, Jun
Format | Journal Article |
Language | English |
Published | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.03.2023
ISSN | 0162-8828, 1939-3539, 2160-9292
DOI | 10.1109/TPAMI.2022.3183112 |
Abstract | Human Action Recognition (HAR) aims to understand human behavior and assign a label to each action. It has a wide range of applications, and therefore has been attracting increasing attention in the field of computer vision. Human actions can be represented using various data modalities, such as RGB, skeleton, depth, infrared, point cloud, event stream, audio, acceleration, radar, and WiFi signal, which encode different sources of useful yet distinct information and have various advantages depending on the application scenarios. Consequently, lots of existing works have attempted to investigate different types of approaches for HAR using various modalities. In this article, we present a comprehensive survey of recent progress in deep learning methods for HAR based on the type of input data modality. Specifically, we review the current mainstream deep learning methods for single data modalities and multiple data modalities, including the fusion-based and the co-learning-based frameworks. We also present comparative results on several benchmark datasets for HAR, together with insightful observations and inspiring future research directions. |
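The abstract mentions fusion-based and co-learning-based frameworks for multi-modality HAR but the record does not spell them out, so the following is only a minimal illustrative sketch of score-level (late) fusion of RGB and skeleton predictions, one common pattern the survey categorizes. The class names, feature dimensions, and fusion weight below are hypothetical placeholders, not the authors' method.

```python
# Minimal sketch (not from the paper): score-level "late fusion" of two
# modality-specific action classifiers, in the spirit of the fusion-based
# multi-modality HAR frameworks the survey reviews. Feature dimensions,
# class count, and the fusion weight are illustrative assumptions.
import torch
import torch.nn as nn


class ModalityHead(nn.Module):
    """Maps a pre-extracted per-clip feature vector to action-class logits."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(x)


class LateFusionHAR(nn.Module):
    """Weighted average of per-modality class probabilities (late fusion)."""

    def __init__(self, rgb_dim=2048, skel_dim=256, num_classes=60, rgb_weight=0.6):
        super().__init__()
        self.rgb_head = ModalityHead(rgb_dim, num_classes)
        self.skel_head = ModalityHead(skel_dim, num_classes)
        self.rgb_weight = rgb_weight

    def forward(self, rgb_feat: torch.Tensor, skel_feat: torch.Tensor) -> torch.Tensor:
        # Convert each modality's logits to class probabilities, then mix them.
        p_rgb = torch.softmax(self.rgb_head(rgb_feat), dim=-1)
        p_skel = torch.softmax(self.skel_head(skel_feat), dim=-1)
        return self.rgb_weight * p_rgb + (1.0 - self.rgb_weight) * p_skel


if __name__ == "__main__":
    model = LateFusionHAR()
    rgb_feat = torch.randn(4, 2048)   # e.g., pooled features from an RGB video backbone
    skel_feat = torch.randn(4, 256)   # e.g., pooled features from a skeleton (GCN) backbone
    actions = model(rgb_feat, skel_feat).argmax(dim=-1)
    print(actions.shape)              # torch.Size([4])
```

Feature-level (early) fusion and the co-learning schemes also covered by the survey would instead combine the modality features, or transfer knowledge between the two branches, before classification.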
Author | Sun, Zehua; Ke, Qiuhong; Rahmani, Hossein; Bennamoun, Mohammed; Wang, Gang; Liu, Jun
Author_xml |
– 1. Zehua Sun (ORCID 0000-0002-8568-2121), zehua.sun@my.cityu.edu.hk, Singapore University of Technology and Design, Singapore
– 2. Qiuhong Ke (ORCID 0000-0001-9998-3614), Qiuhong.Ke@monash.edu, Monash University, Clayton, VIC, Australia
– 3. Hossein Rahmani (ORCID 0000-0003-1920-0371), h.rahmani@lancaster.ac.uk, Lancaster University, Lancaster, UK
– 4. Mohammed Bennamoun (ORCID 0000-0002-6603-3257), mohammed.bennamoun@uwa.edu.au, University of Western Australia, Crawley, WA, Australia
– 5. Gang Wang (ORCID 0000-0002-1816-1457), wanggang@ntu.edu.sg, Alibaba Group, Hangzhou, Zhejiang, China
– 6. Jun Liu (ORCID 0000-0002-4365-4165), jun_liu@sutd.edu.sg, Singapore University of Technology and Design, Singapore
CODEN | ITPIDJ |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023 |
DOI | 10.1109/TPAMI.2022.3183112 |
Discipline | Engineering; Computer Science
EISSN | 2160-9292 1939-3539 |
EndPage | 3225 |
Genre | orig-research; Research Support, Non-U.S. Gov't; Journal Article; Review
GrantInformation_xml |
– fundername: TAILOR
– fundername: SUTD SRG
– fundername: EU Horizon 2020 research and innovation programme; grantid: 952215
– fundername: National Research Foundation, Singapore; grantid: AISG-100E-2020-065
ISSN | 0162-8828 1939-3539 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 3 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0002-1816-1457 0000-0002-4365-4165 0000-0001-9998-3614 0000-0002-8568-2121 0000-0002-6603-3257 0000-0003-1920-0371 |
PMID | 35700242 |
PQID | 2773455306 |
PQPubID | 85458 |
PageCount | 26 |
PublicationCentury | 2000 |
PublicationDate | 2023-03-01 |
PublicationDecade | 2020 |
PublicationPlace | United States; New York
PublicationTitle | IEEE transactions on pattern analysis and machine intelligence |
PublicationTitleAbbrev | TPAMI |
PublicationTitleAlternate | IEEE Trans Pattern Anal Mach Intell |
PublicationYear | 2023 |
Publisher | IEEE (The Institute of Electrical and Electronics Engineers, Inc.)
SecondaryResourceType | review_article |
StartPage | 3200 |
SubjectTerms | Acceleration; Algorithms; Computer vision; data modality; Deep learning; Feature extraction; Human action recognition; Human Activities; Human activity recognition; Human motion; Humans; Machine learning; multi-modality; Optical imaging; Pattern Recognition, Automated - methods; Radar; single modality; Skeleton; Teaching methods; Three-dimensional displays; Visualization
Title | Human Action Recognition From Various Data Modalities: A Review |
URI | https://ieeexplore.ieee.org/document/9795869 https://www.ncbi.nlm.nih.gov/pubmed/35700242 https://www.proquest.com/docview/2773455306 https://www.proquest.com/docview/2676926739 |
Volume | 45 |