A Multimodal Data Processing System for LiDAR-Based Human Activity Recognition
Published in | IEEE Transactions on Cybernetics, vol. 52, no. 10, pp. 10027-10040 |
Main Authors | Roche, Jamie; De-Silva, Varuna; Hook, Joosep; Moencks, Mirco; Kondoz, Ahmet |
Format | Journal Article |
Language | English |
Published | Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.10.2022 |
Abstract | Increasingly, the task of detecting and recognizing human actions has been delegated to neural networks that process camera or wearable sensor data. Because cameras are sensitive to lighting conditions and wearable sensors provide only sparse coverage, neither modality alone can capture the data required to perform the task confidently. Range sensors, such as light detection and ranging (LiDAR), can therefore complement these modalities to perceive the environment more robustly. Recently, researchers have explored ways to apply convolutional neural networks to 3-D data. These methods typically rely on a single modality and cannot draw on information from complementary sensor streams to improve accuracy. This article proposes a framework that tackles human activity recognition by leveraging sensor fusion and multimodal machine learning. Given both RGB and point cloud data, the method describes the activities being performed by subjects using a region-based convolutional neural network (R-CNN) and a 3-D modified Fisher vector network. Evaluation on a custom-captured multimodal dataset demonstrates that the model achieves accurate human activity classification (90%). This framework can be applied to sports analytics, understanding social behavior, surveillance, and, perhaps most notably, by autonomous vehicles (AVs) to support data-driven decision-making policies in urban areas and indoor environments. |
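The abstract describes fusing features from an RGB branch and a point-cloud branch before classification. The paper's actual architecture (R-CNN plus a 3-D modified Fisher vector network) is not reproduced here; the following is a minimal late-fusion sketch in plain Python, assuming each branch has already produced a fixed-length feature vector. All feature values, weights, and class labels below are hypothetical, for illustration only.

```python
# Late fusion by concatenation (illustrative sketch, not the paper's model).
# `rgb_feat` stands in for an R-CNN-style image descriptor and `pc_feat`
# for a point-cloud descriptor; both are hypothetical toy vectors.

def fuse_features(rgb_feat, pc_feat):
    """Concatenate the per-modality feature vectors: [rgb | point cloud]."""
    return list(rgb_feat) + list(pc_feat)

def classify(fused, weights, biases):
    """Score each activity class with a linear layer; return the argmax index."""
    scores = []
    for w, b in zip(weights, biases):
        scores.append(sum(f * wi for f, wi in zip(fused, w)) + b)
    return max(range(len(scores)), key=scores.__getitem__)

# Toy example: 2-D features per modality, 2 activity classes.
rgb_feat = [0.9, 0.1]                    # hypothetical image-branch output
pc_feat = [0.2, 0.8]                     # hypothetical point-cloud-branch output
fused = fuse_features(rgb_feat, pc_feat) # 4-D fused vector
weights = [[1, 0, 1, 0], [0, 1, 0, 1]]   # hypothetical classifier weights
biases = [0.0, 0.0]
print(classify(fused, weights, biases))  # prints 0 (first activity class)
```

The point of the fusion step is that the classifier sees evidence from both sensor streams at once, so a modality degraded by lighting (camera) or sparsity (wearables/LiDAR) can be compensated by the other.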
Author | Roche, Jamie; De-Silva, Varuna; Hook, Joosep; Moencks, Mirco; Kondoz, Ahmet |
Author_xml | – sequence: 1 givenname: Jamie orcidid: 0000-0002-5449-3774 surname: Roche fullname: Roche, Jamie email: a.j.roche@lboro.ac.uk organization: Institute for Digital Technologies, Loughborough University London, London, U.K – sequence: 2 givenname: Varuna surname: De-Silva fullname: De-Silva, Varuna organization: Institute for Digital Technologies, Loughborough University London, London, U.K – sequence: 3 givenname: Joosep surname: Hook fullname: Hook, Joosep organization: Institute for Digital Technologies, Loughborough University London, London, U.K – sequence: 4 givenname: Mirco orcidid: 0000-0003-1108-6455 surname: Moencks fullname: Moencks, Mirco organization: Institute for Manufacturing, University of Cambridge, Cambridge, U.K – sequence: 5 givenname: Ahmet surname: Kondoz fullname: Kondoz, Ahmet organization: Institute for Digital Technologies, Loughborough University London, London, U.K |
CODEN | ITCEB8 |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
DOI | 10.1109/TCYB.2021.3085489 |
Discipline | Sciences (General) |
EISSN | 2168-2275 |
EndPage | 10040 |
Genre | orig-research |
ISSN | 2168-2267 2168-2275 |
Issue | 10 |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0002-5449-3774 0000-0003-1108-6455 |
PMID | 34166219 |
PageCount | 14 |
PublicationDate | 2022-10-01 |
PublicationPlace | Piscataway |
PublicationTitle | IEEE Transactions on Cybernetics |
PublicationTitleAbbrev | TCYB |
PublicationYear | 2022 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 10027 |
SubjectTerms | Activity recognition; Artificial neural networks; Cameras; Convolutional neural network; Data processing; Decision making; Faster R-CNN; Fisher vector; Human activity recognition (HAR); Indoor environments; Laser radar; Lidar; Machine learning; Micromechanical devices; Multimodal machine learning (ML); Multisensor fusion; Neural networks; Sensors; Three-dimensional displays; Urban areas; Wearable sensors; Wearable technology |
Title | A Multimodal Data Processing System for LiDAR-Based Human Activity Recognition |
URI | https://ieeexplore.ieee.org/document/9464313 https://www.proquest.com/docview/2716348030 https://www.proquest.com/docview/2545599484 |
Volume | 52 |