Robust Human Activity Recognition Using Multimodal Feature-Level Fusion
Published in | IEEE Access, Vol. 7, pp. 60736-60751
Main Authors | Muhammad Ehatisham-Ul-Haq, Ali Javed, Muhammad Awais Azam, Hafiz M. A. Malik, Aun Irtaza, Ik Hyun Lee, Muhammad Tariq Mahmood
Format | Journal Article
Language | English
Published | Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2019
Online Access | Get full text
Abstract | Automated recognition of human activities or actions has great significance as it incorporates wide-ranging applications, including surveillance, robotics, and personal health monitoring. Over the past few years, many computer vision-based methods have been developed for recognizing human actions from RGB and depth camera videos. These methods include space-time trajectory, motion encoding, key poses extraction, space-time occupancy patterns, depth motion maps, and skeleton joints. However, these camera-based approaches are affected by background clutter and illumination changes and applicable to a limited field of view only. Wearable inertial sensors provide a viable solution to these challenges but are subject to several limitations such as location and orientation sensitivity. Due to the complementary trait of the data obtained from the camera and inertial sensors, the utilization of multiple sensing modalities for accurate recognition of human actions is gradually increasing. This paper presents a viable multimodal feature-level fusion approach for robust human action recognition, which utilizes data from multiple sensors, including RGB camera, depth sensor, and wearable inertial sensors. We extracted the computationally efficient features from the data obtained from RGB-D video camera and inertial body sensors. These features include densely extracted histogram of oriented gradient (HOG) features from RGB/depth videos and statistical signal attributes from wearable sensors data. The proposed human action recognition (HAR) framework is tested on a publicly available multimodal human action dataset UTD-MHAD consisting of 27 different human actions. K-nearest neighbor and support vector machine classifiers are used for training and testing the proposed fusion model for HAR. The experimental results indicate that the proposed scheme achieves better recognition results as compared to the state of the art. The feature-level fusion of RGB and inertial sensors provides the overall best performance for the proposed system, with an accuracy rate of 97.6%. |
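The abstract outlines the full pipeline: dense HOG descriptors from RGB/depth frames, statistical attributes from wearable inertial signals, feature-level fusion, and KNN/SVM classification. The sketch below illustrates that pipeline in Python with scikit-image's hog and scikit-learn classifiers; the frame size, HOG cell parameters, the particular statistical attributes, and the synthetic stand-in data are illustrative assumptions, not the authors' implementation or the actual UTD-MHAD preprocessing.

```python
# Minimal sketch (not the authors' code) of multimodal feature-level fusion:
# dense HOG descriptors from video frames are concatenated with simple
# statistical attributes of inertial-sensor signals, then fed to KNN/SVM.
import numpy as np
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC


def video_hog_features(frames, pixels_per_cell=(16, 16)):
    """Average dense HOG descriptors over the frames of one action clip."""
    descriptors = [
        hog(frame, orientations=9, pixels_per_cell=pixels_per_cell,
            cells_per_block=(2, 2), feature_vector=True)
        for frame in frames
    ]
    return np.mean(descriptors, axis=0)


def inertial_statistical_features(signal):
    """Per-axis statistical attributes of an (n_samples, n_axes) inertial signal."""
    return np.concatenate([
        signal.mean(axis=0), signal.std(axis=0),
        signal.min(axis=0), signal.max(axis=0),
        np.sqrt((signal ** 2).mean(axis=0)),   # root mean square per axis
    ])


def fuse(frames, signal):
    """Feature-level fusion: concatenate the modality-specific feature vectors."""
    return np.concatenate([video_hog_features(frames),
                           inertial_statistical_features(signal)])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder data standing in for real clips: 40 clips, each with 8
    # grayscale 64x64 frames and a 6-axis inertial recording of 100 samples.
    X = np.stack([fuse(rng.random((8, 64, 64)), rng.standard_normal((100, 6)))
                  for _ in range(40)])
    y = rng.integers(0, 4, size=40)            # 4 dummy action labels

    # Accuracy on random placeholder data is meaningless; this only exercises
    # the train/test flow with the two classifiers named in the abstract.
    for clf in (KNeighborsClassifier(n_neighbors=3), SVC(kernel="rbf")):
        clf.fit(X[:30], y[:30])
        print(type(clf).__name__, "accuracy:", clf.score(X[30:], y[30:]))
```

Concatenating the per-modality vectors before classification is what distinguishes feature-level fusion from decision-level fusion, where each modality is classified separately and only the outputs are combined.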
Authors |
– Muhammad Ehatisham-Ul-Haq, Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
– Ali Javed, Department of Software Engineering, University of Engineering and Technology, Taxila, Pakistan
– Muhammad Awais Azam, Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
– Hafiz M. A. Malik, Electrical and Computer Engineering Department, University of Michigan-Dearborn, Dearborn, MI, USA
– Aun Irtaza, Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
– Ik Hyun Lee, Department of Mechatronics, Korea Polytechnic University, Gyeonggi-do, South Korea
– Muhammad Tariq Mahmood (tariq@koreatech.ac.kr), School of Computer Science and Information Engineering, Korea University of Technology and Education, Cheonan, South Korea
CODEN | IAECCG |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2019 |
DOI | 10.1109/ACCESS.2019.2913393 |
Discipline | Engineering |
EISSN | 2169-3536 |
EndPage | 60751 |
Genre | orig-research |
GrantInformation | National Research Foundation of Korea (10.13039/501100003725), grants 2017R1D1A1B03033526 and 2016R1D1A1B03933860; Ministry of Education (10.13039/100010002), grant NRF-2017R1A6A1A03015562
ISSN | 2169-3536 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/OAPA.html |
ORCID | 0000-0002-1290-1477 0000-0001-6006-3888 0000-0002-0605-7572 0000-0001-6814-3137 0000-0003-0488-4598 |
OpenAccessLink | https://doaj.org/article/e12307f3f9464d0cbdeb55d9c80d42cb |
PQID | 2455625997 |
PQPubID | 4845423 |
PageCount | 16 |
PublicationDate | 2019
PublicationPlace | Piscataway |
PublicationTitle | IEEE access |
PublicationTitleAbbrev | Access |
PublicationYear | 2019 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 60736 |
SubjectTerms | Cameras; Clutter; Computer vision; Dense HOG; depth sensor; Feature extraction; Feature recognition; feature-level fusion; Field of view; Histograms; human action recognition; Human activity recognition; Inertial sensing devices; inertial sensor; Moving object recognition; Occupancy; RGB camera; Robotics; Robustness; Sensor fusion; Sensor phenomena and characterization; Sensors; Spacetime; Support vector machines; Three-dimensional displays; Video; Wearable sensors; Wearable technology
Title | Robust Human Activity Recognition Using Multimodal Feature-Level Fusion |
URI | https://ieeexplore.ieee.org/document/8701429 https://www.proquest.com/docview/2455625997 https://doaj.org/article/e12307f3f9464d0cbdeb55d9c80d42cb |
Volume | 7 |