Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images

Bibliographic Details
Published in: Sensors (Basel, Switzerland), Vol. 17, No. 6, p. 1341
Main Authors: Ran, Lingyan; Zhang, Yanning; Zhang, Qilin; Yang, Tao
Format: Journal Article
Language: English
Published: Switzerland: MDPI AG, 12.06.2017

Summary: Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify orientation estimation and path prediction and to improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the "navigation via classification" task, we introduce a spherical camera for scene capturing, which enables 360° fisheye panoramas as training samples and the generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.
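To illustrate the "navigation via classification" idea described in the summary, the following is a minimal sketch of a CNN that maps a single uncalibrated spherical (fisheye panorama) frame to a discrete heading-direction class. It assumes a PyTorch-style implementation; the layer sizes, input resolution, and the number of heading classes are illustrative assumptions, not the architecture or label set used in the paper or the Spherical-Navi dataset.

```python
# Hypothetical sketch: heading-direction classification from a spherical image.
# Architecture, input size, and NUM_HEADINGS are illustrative assumptions only.
import torch
import torch.nn as nn

NUM_HEADINGS = 8  # assumed discretization of candidate heading directions


class HeadingClassifier(nn.Module):
    def __init__(self, num_classes: int = NUM_HEADINGS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of panorama frames, shape (N, 3, H, W); no calibration assumed
        feats = self.features(x).flatten(1)
        return self.classifier(feats)  # logits over candidate heading directions


if __name__ == "__main__":
    model = HeadingClassifier()
    frame = torch.randn(1, 3, 224, 224)          # placeholder panorama frame
    probs = torch.softmax(model(frame), dim=1)   # confidence per heading direction
    print(probs.argmax(dim=1))                   # predicted path direction
```

Framing the problem this way replaces continuous pose/trajectory estimation with a single forward pass that scores each candidate heading, which is the computational simplification the summary emphasizes.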
Bibliography: This paper is an extended version of our paper published in Ran, L.; Zhang, Y.; Yang, T.; Zhang, P. Autonomous Wheeled Robot Navigation with Uncalibrated Spherical Images. In Chinese Conference on Intelligent Visual Surveillance; Springer: Singapore, 2016; pp. 47–55.
ISSN: 1424-8220
DOI: 10.3390/s17061341