Monocular Visual Simultaneous Localization and Mapping: (R)Evolution From Geometry to Deep Learning-Based Pipelines

Bibliographic Details
Published in: IEEE Transactions on Artificial Intelligence, vol. 5, no. 5, pp. 1990-2010
Main Authors: Alvarez-Tunon, Olaya; Brodskiy, Yury; Kayacan, Erdal
Format: Journal Article
Language: English
Published: IEEE, 01.05.2024

Summary: With the rise of deep learning, there is a fundamental change in visual simultaneous localization and mapping (SLAM) algorithms toward developing different modules trained as end-to-end pipelines. However, regardless of the implementation domain, visual SLAM's performance is subject to diverse environmental challenges, such as dynamic elements in outdoor environments, harsh imaging conditions in underwater environments, or blurriness in high-speed setups. These environmental challenges need to be identified to study the real-world viability of SLAM implementations. Motivated by the aforementioned challenges, this article surveys the current state of visual SLAM algorithms according to the two main frameworks: geometry-based and learning-based SLAM. First, we introduce a general formulation of the SLAM pipeline that includes most of the implementations in the literature. Second, those implementations are classified and surveyed for geometry- and learning-based SLAM. After that, environment-specific challenges are formulated to enable experimental evaluation of the resilience of different visual SLAM classes to varying imaging conditions. We address two significant issues in surveying visual SLAM: providing a consistent classification of visual SLAM pipelines and a robust evaluation of their performance under different deployment conditions. Finally, we give our take on future opportunities for visual SLAM implementations.
ISSN:2691-4581
DOI:10.1109/TAI.2023.3321032