A novel approach for automatic annotation of human actions in 3D point clouds for flexible collaborative tasks with industrial robots

Manual annotation for human action recognition with content semantics using 3D Point Cloud (3D-PC) in industrial environments consumes a lot of time and resources. This work aims to recognize, analyze, and model human actions to develop a framework for automatically extracting content semantics. Mai...

Bibliographic Details
Published in Frontiers in Robotics and AI Vol. 10; p. 1028329
Main Authors Krusche, Sebastian, Al Naser, Ibrahim, Bdiwi, Mohamad, Ihlenfeldt, Steffen
Format Journal Article
Language English
Published Switzerland: Frontiers Media S.A., 15.02.2023
ISSN2296-9144
DOI 10.3389/frobt.2023.1028329

Abstract Manual annotation for human action recognition with content semantics using 3D Point Cloud (3D-PC) in industrial environments consumes a lot of time and resources. This work aims to recognize, analyze, and model human actions to develop a framework for automatically extracting content semantics. Main contributions of this work: 1. design of a multi-layer structure of various DNN classifiers to detect and extract humans and dynamic objects precisely using 3D-PC, 2. empirical experiments with over 10 subjects for collecting datasets of human actions and activities in one industrial setting, 3. development of an intuitive GUI to verify human actions and their interaction activities with the environment, 4. design and implementation of a methodology for automatic sequence matching of human actions in 3D-PC. All these procedures are merged in the proposed framework and evaluated in one industrial use case with flexible patch sizes. Comparing the new approach with standard methods has shown that the annotation process can be accelerated by 5.2 times through automation.
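The sequence-matching step named in contribution 4 can be pictured with a small sketch. The following is an illustrative assumption only, not the authors' implementation: it aligns an observed per-frame feature sequence against a reference action template using dynamic time warping (DTW); the feature extraction from the 3D point cloud is abstracted away, and the function name dtw_distance as well as the 16-dimensional feature vectors are hypothetical.

```python
import numpy as np

def dtw_distance(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
    """Dynamic time warping cost between two sequences of per-frame feature vectors."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])   # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],               # skip a frame in seq_a
                                 cost[i, j - 1],               # skip a frame in seq_b
                                 cost[i - 1, j - 1])           # match the two frames
    return float(cost[n, m])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    template = rng.normal(size=(40, 16))    # hypothetical reference action: 40 frames, 16-D features
    recording = rng.normal(size=(55, 16))   # hypothetical observed sequence: 55 frames
    print(f"DTW distance: {dtw_distance(template, recording):.2f}")
```

A low DTW cost indicates that an observed segment resembles a reference action, which is the general idea behind matching recorded sequences to labeled templates.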
Author Bdiwi, Mohamad
Al Naser, Ibrahim
Ihlenfeldt, Steffen
Krusche, Sebastian
AuthorAffiliation Department of Production System and Factory Automation, Fraunhofer Institute for Machine Tools and Forming Technology, Chemnitz, Germany
ContentType Journal Article
Copyright Copyright © 2023 Krusche, Al Naser, Bdiwi and Ihlenfeldt.
Discipline Engineering
DocumentTitleAlternate Krusche et al
EISSN 2296-9144
ExternalDocumentID oai_doaj_org_article_1fec5f455a194317bc60b97b0402c8fd
PMC9975387
36873582
10_3389_frobt_2023_1028329
Genre Journal Article
ISSN 2296-9144
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Keywords point cloud annotation
robotics
deep learning
data labeling
human activity recognition
Language English
License Copyright © 2023 Krusche, Al Naser, Bdiwi and Ihlenfeldt.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Notes This article was submitted to Robotic Control Systems, a section of the journal Frontiers in Robotics and AI
Edited by: Jose Luis Sanchez-Lopez, University of Luxembourg, Luxembourg
Reviewed by: Yong-Guk Kim, Sejong University, Republic of Korea; Hang Su, Fondazione Politecnico di Milano, Italy
OpenAccessLink https://doaj.org/article/1fec5f455a194317bc60b97b0402c8fd
PMID 36873582
PQID 2783790609
PQPubID 23479
PublicationDate 2023-02-15
PublicationPlace Switzerland
PublicationTitle Frontiers in robotics and AI
PublicationTitleAlternate Front Robot AI
PublicationYear 2023
Publisher Frontiers Media S.A
StartPage 1028329
SubjectTerms data labeling
deep learning
human activity recognition
point cloud annotation
robotics
Robotics and AI
Title A novel approach for automatic annotation of human actions in 3D point clouds for flexible collaborative tasks with industrial robots
URI https://www.ncbi.nlm.nih.gov/pubmed/36873582
https://www.proquest.com/docview/2783790609
https://pubmed.ncbi.nlm.nih.gov/PMC9975387
https://doaj.org/article/1fec5f455a194317bc60b97b0402c8fd
Volume 10