LLN-SLAM: A Lightweight Learning Network Semantic SLAM

Bibliographic Details
Published in: Intelligence Science and Big Data Engineering. Big Data and Machine Learning, Vol. 11936, pp. 253–265
Main Authors: Qu, Xichao; Li, Weiqing
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing AG, 2019
Series: Lecture Notes in Computer Science
Summary: Semantic SLAM has been an active research topic in computer vision in recent years. Mainstream semantic SLAM methods can perform semantic extraction in real time, but they fail to run properly on resource-constrained platforms. This paper proposes LLN-SLAM, a lightweight semantic SLAM method for portable devices. The method extracts semantic information by matching object detection results against projected point cloud segments. To keep the running speed acceptable, the lightweight MobileNet network is used for object detection, and Euclidean distance clustering is applied for point cloud segmentation. In a typical augmented reality scenario, there is no way to prevent people other than the user from moving through the scene, which introduces a large error into visual positioning. Semantic information is therefore used to assist positioning: the algorithm does not extract features on dynamic semantic objects. Experimental results show that the method runs stably on portable devices, and that the positioning error caused by moving dynamic objects is effectively corrected while the environmental semantic map is built.
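
The key step in the summary, skipping feature extraction on dynamic semantic objects, can be illustrated with a short sketch. The code below is not the authors' implementation: it assumes a detector that returns labeled bounding boxes (the detections argument stands in for the output of the MobileNet-based detector), and it uses OpenCV's ORB as a stand-in feature extractor, masking out regions labeled as dynamic (e.g. "person") before keypoints are computed.

    # Illustrative sketch (not the authors' code): skip feature
    # extraction on regions covered by dynamic semantic objects.
    import cv2
    import numpy as np

    DYNAMIC_CLASSES = {"person"}  # assumption: people are the dynamic class

    def extract_static_features(gray, detections):
        """gray: uint8 grayscale frame.
        detections: list of (label, (x, y, w, h)) boxes from the object
        detector; this interface is an assumption for the sketch."""
        # Start with a mask that allows feature extraction everywhere.
        mask = np.full(gray.shape, 255, dtype=np.uint8)
        # Zero out the mask over every detected dynamic object.
        for label, (x, y, w, h) in detections:
            if label in DYNAMIC_CLASSES:
                cv2.rectangle(mask, (x, y), (x + w, y + h), 0, thickness=-1)
        # ORB honors the mask: no keypoints land inside dynamic regions.
        orb = cv2.ORB_create(nfeatures=1000)
        keypoints, descriptors = orb.detectAndCompute(gray, mask)
        return keypoints, descriptors

    # Example with a hypothetical detection:
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
    dets = [("person", (120, 40, 80, 200))]
    kps, desc = extract_static_features(frame, dets)

With dynamic regions masked out, only features on the static background feed pose estimation, which is how the positioning error caused by moving people can be corrected while the semantic map is built.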
Bibliography: This work was realized by a student. This work is supported by the National Key R&D Program of China (2018YFB1004904) and the National Key Technology Research and Development Program of China during the "13th Five-Year Plan": 41401010203, 315050502, 31511040202.
ISBN: 9783030362034; 3030362035
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-030-36204-1_21