A Method of Vision Aided GNSS Positioning Using Semantic Information in Complex Urban Environment


Bibliographic Details
Published in: Remote Sensing (Basel, Switzerland), Vol. 14, No. 4, p. 869
Main Authors: Zhai, Rui; Yuan, Yunbin
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.02.2022

Summary: High-precision localization through multi-sensor fusion has become a popular research direction in unmanned driving. However, most previous studies have performed well only in open-sky conditions, so high-precision localization in complex urban environments remains an urgent problem. The complex urban environments considered in this study include dynamic environments, which limit visual localization performance, and highly occluded environments, which limit global navigation satellite system (GNSS) performance. To provide high-precision localization in these environments, we propose a vision-aided GNSS positioning method that uses semantic information and integrates stereo cameras and GNSS into a loosely coupled navigation system. To suppress the effect of dynamic objects on visual positioning accuracy, we propose a dynamic simultaneous localization and mapping (Dynamic-SLAM) algorithm that extracts semantic information from images using a deep learning framework. For GNSS-challenged environments, we propose a semantic-based dynamic adaptive Kalman filtering fusion (S-AKF) algorithm to realize vision-aided GNSS and achieve stable, high-precision positioning. Experiments were carried out in GNSS-challenged environments using the open-source KITTI dataset to evaluate the performance of the proposed algorithms. The results indicate that the Dynamic-SLAM algorithm improved visual localization performance and effectively suppressed the propagation of visual localization errors. Additionally, after vision was integrated, the loosely coupled navigation system achieved continuous, high-accuracy positioning in GNSS-challenged environments.
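The abstract describes a loosely coupled scheme in which GNSS fixes are fused with vision through an adaptive Kalman filter whose weighting responds to the environment. As a rough illustration only (the paper's actual S-AKF uses semantic information and an adaptation law not reproduced here, and the class, parameters, and numbers below are assumptions), the following Python sketch shows a minimal 2-D loosely coupled filter in which visual odometry drives the prediction step and the GNSS measurement covariance is inflated when a hypothetical quality indicator is low, as it would be in a highly occluded urban canyon.

```python
import numpy as np

# Illustrative loosely coupled fusion of visual-odometry increments and GNSS
# position fixes. Not the paper's S-AKF; all names and values are assumptions.
class LooselyCoupledFilter:
    def __init__(self, p0, sigma_vo=0.05, sigma_gnss=1.0):
        self.x = np.asarray(p0, dtype=float)   # state: 2-D position [east, north]
        self.P = np.eye(2) * 1.0               # state covariance
        self.Q = np.eye(2) * sigma_vo ** 2     # process noise from visual odometry
        self.R0 = np.eye(2) * sigma_gnss ** 2  # nominal GNSS measurement noise

    def predict(self, delta_p_vo):
        """Propagate the state with a visual-odometry position increment."""
        self.x = self.x + np.asarray(delta_p_vo, dtype=float)
        self.P = self.P + self.Q

    def update(self, z_gnss, quality):
        """Fuse a GNSS fix; 'quality' in (0, 1] down-weights occluded epochs."""
        R = self.R0 / max(quality, 1e-3)       # inflate noise when quality is low
        H = np.eye(2)                          # GNSS observes position directly
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ (np.asarray(z_gnss, dtype=float) - H @ self.x)
        self.P = (np.eye(2) - K @ H) @ self.P

# Example epoch: visual odometry moves the vehicle, then a degraded GNSS fix
# is fused with reduced weight because the quality indicator is low.
f = LooselyCoupledFilter(p0=[0.0, 0.0])
f.predict(delta_p_vo=[1.0, 0.2])
f.update(z_gnss=[1.4, 0.1], quality=0.3)
print(f.x)
```

Because the filter trusts the low-quality GNSS fix less, the fused position stays close to the visual-odometry prediction, which is the qualitative behavior the abstract attributes to the vision-aided system in GNSS-challenged environments.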
ISSN: 2072-4292
DOI: 10.3390/rs14040869