Layering Laser Rangefinder Points onto Images for Obstacle Avoidance by a Neural Network

Bibliographic Details
Published in: 2019 SoutheastCon, pp. 1-6
Main Authors: Ges, Nicholas P.; Anderson, Will C.; Lowrance, Christopher J.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.04.2019

Summary: Obstacle avoidance is essential to autonomous robot navigation, but maneuvering around an obstacle causes the system to deviate from its normal path. Oftentimes, these deviations cause the robot to enter new regions that lack the path's usual or meaningful features. This is problematic for vision-based steering controllers, including convolutional neural networks (CNNs), which depend on patterns being present in camera images. The absence of a path fails to provide consistent and noticeable patterns for the neural network, and this usually leads to erroneous steering commands. In this paper, we mitigate this problem by superimposing points from a two-dimensional (2D) scanning laser rangefinder (LRF) onto camera images using the Open Source Computer Vision (OpenCV) library. The visually encoded LRF data provides the CNN with a new pattern to recognize, aiding in the avoidance of obstacles and the rediscovery of its path. In contrast, existing approaches to robot navigation do not use a single CNN to perform both line-following and obstacle avoidance. Using our approach, we were able to train a CNN to follow a lined path and avoid obstacles with a reliability rate of nearly 60% on a complex course and over 80% on a simpler course.
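
The summary describes superimposing 2D LRF points onto camera images with OpenCV but does not specify how the overlay is produced. The sketch below illustrates one plausible way to do it: project each range/angle return through a pinhole camera model and draw a distance-coded dot with cv2.circle. The camera intrinsics (FX, FY, CX, CY), the sensor mounting offset LRF_TO_CAM, and the function name overlay_lrf_points are illustrative assumptions, not values or names taken from the paper.

    import cv2
    import numpy as np

    # Hypothetical camera intrinsics (focal lengths and principal point, in pixels).
    FX, FY = 525.0, 525.0
    CX, CY = 320.0, 240.0

    # Hypothetical mounting offset of the LRF relative to the camera, in metres
    # (both sensors assumed to face forward along +z).
    LRF_TO_CAM = np.array([0.0, 0.10, 0.05])

    def overlay_lrf_points(image, ranges, angles, max_range=5.0):
        """Project 2D LRF returns into the image and draw them as coloured dots.

        ranges: distances in metres, one per beam.
        angles: beam angles in radians (0 = straight ahead).
        Colour encodes distance (red = near, green = far), giving the CNN a
        visual obstacle cue alongside the normal camera content.
        """
        out = image.copy()
        for r, a in zip(ranges, angles):
            if not np.isfinite(r) or r <= 0.0 or r > max_range:
                continue
            # LRF return in the sensor's scan plane: x right, z forward, y = 0.
            p_lrf = np.array([r * np.sin(a), 0.0, r * np.cos(a)])
            # Shift into the camera frame using the assumed mounting offset.
            p_cam = p_lrf + LRF_TO_CAM
            if p_cam[2] <= 0.1:          # behind or too close to the camera
                continue
            # Pinhole projection into pixel coordinates.
            u = int(FX * p_cam[0] / p_cam[2] + CX)
            v = int(FY * p_cam[1] / p_cam[2] + CY)
            if 0 <= u < out.shape[1] and 0 <= v < out.shape[0]:
                near = 1.0 - min(r / max_range, 1.0)
                color = (0, int(255 * (1.0 - near)), int(255 * near))  # BGR
                cv2.circle(out, (u, v), 4, color, thickness=-1)
        return out

    if __name__ == "__main__":
        frame = np.zeros((480, 640, 3), dtype=np.uint8)      # stand-in camera frame
        angles = np.linspace(-np.pi / 4, np.pi / 4, 90)       # 90-degree scan
        ranges = np.full_like(angles, 2.0)                    # flat obstacle 2 m ahead
        cv2.imwrite("overlay.png", overlay_lrf_points(frame, ranges, angles))

The resulting composite image could then be fed to the CNN in place of the raw camera frame, which is consistent with the paper's idea of giving the network a recognisable pattern when the lined path leaves the field of view.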
ISSN: 1558-058X
DOI: 10.1109/SoutheastCon42311.2019.9020359