Visual Navigation of Mobile Robots in Complex Environments Based on Distributed Deep Reinforcement Learning


Bibliographic Details
Published in: 2022 6th Asian Conference on Artificial Intelligence Technology (ACAIT), pp. 1-5
Main Authors: Zhang, Yi; Yang, Zhile; Zhu, Zihan; Feng, Wei; Zhou, Zhaokun; Wang, Weijun
Format: Conference Proceeding
Language: English
Published: IEEE, 09.12.2022
DOI: 10.1109/ACAIT56212.2022.10137974

Summary: Deep reinforcement learning, an increasingly popular method, can not only help mobile robots output accurate actions in complex environments but can also search for collision-free paths. In this paper, a robot visual navigation model for complex environments based on distributed deep reinforcement learning is proposed. According to the characteristics of different regions of a complex environment, the environment is divided into several regions, so that the proposed method can realize visual navigation in large, complex scenes. Within these regions, we combine long short-term memory (LSTM) and proximal policy optimization (PPO) algorithms as the local visual navigation model and design a new reward function that shapes training through factors such as the action of the mobile robot, the distance between the robot and the target, and the robot's running time. Each region builds its own experience pool independently during model training. The distributed deep reinforcement learning visual navigation model takes the RGB-D image obtained from the robot's first-person perspective and the polar coordinates of the target in the robot's coordinate system as input, and outputs the continuous motion of the mobile robot, realizing end-to-end visual navigation without maps. Our model can accurately complete robot visual navigation in large, complex scenes without maps or human intervention. In our experiments, we verify the proposed model by performing navigation tasks in virtual environments.
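The abstract specifies the model's interface: a recurrent PPO policy whose observation is the first-person RGB-D image plus the target's polar coordinates in the robot frame, whose output is a continuous motion command, and whose reward depends on the robot's action, its distance to the target, and its running time. The paper's exact network sizes, reward coefficients, and terminal terms are not given in this record, so the following Python/PyTorch sketch is only an illustration under those assumptions; the class name LSTMPPOActor, all layer dimensions, and the reward weights are hypothetical.

    # Minimal sketch (not the authors' code): an LSTM+PPO-style actor mapping an
    # RGB-D observation and the target's polar coordinates to a continuous action,
    # plus an illustrative reward combining distance, action, and time terms.
    import torch
    import torch.nn as nn

    class LSTMPPOActor(nn.Module):
        def __init__(self, action_dim=2, hidden_size=256):
            super().__init__()
            # CNN encoder for the 4-channel RGB-D image from the robot's first-person view
            self.encoder = nn.Sequential(
                nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
                nn.Flatten(),
            )
            self.proj = nn.LazyLinear(hidden_size)
            # LSTM fuses the image feature with the target's polar coordinates (distance, bearing)
            self.lstm = nn.LSTM(hidden_size + 2, hidden_size, batch_first=True)
            self.mu = nn.Linear(hidden_size, action_dim)        # mean of the continuous action
            self.log_std = nn.Parameter(torch.zeros(action_dim))
            self.value = nn.Linear(hidden_size, 1)              # critic head used by PPO

        def forward(self, rgbd, target_polar, hidden=None):
            # rgbd: (B, 4, H, W); target_polar: (B, 2) = (distance, angle) in the robot frame
            feat = torch.relu(self.proj(self.encoder(rgbd)))
            x = torch.cat([feat, target_polar], dim=-1).unsqueeze(1)  # add a time dimension
            out, hidden = self.lstm(x, hidden)
            out = out.squeeze(1)
            dist = torch.distributions.Normal(torch.tanh(self.mu(out)), self.log_std.exp())
            return dist, self.value(out), hidden

    def reward(prev_dist, curr_dist, action, step, reached, collided,
               w_progress=1.0, w_action=0.01, w_time=0.005):
        """Illustrative reward (coefficients and terminal values are assumptions):
        progress toward the target, a penalty on large actions, a per-step time
        penalty, and terminal bonuses/penalties for reaching the goal or colliding."""
        if collided:
            return -10.0
        if reached:
            return 10.0
        progress = prev_dist - curr_dist          # distance-based term
        action_cost = float((action ** 2).sum())  # action-magnitude term
        return w_progress * progress - w_action * action_cost - w_time * step

In a distributed setup along the lines the abstract describes, one such actor would be trained per environment region, each maintaining its own experience pool; PPO's clipped-surrogate update itself is omitted here for brevity.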