Q-learning algorithm based UAV path learning and obstacle avoidance approach


Bibliographic Details
Published in: Chinese Control Conference, pp. 3397–3402
Main Authors: Zhao Yijing, Zheng Zheng, Zhang Xiaoyi, Liu Yang
Format: Conference Proceeding
Language: English
Published: Technical Committee on Control Theory, CAA, 01.07.2017

Summary: As Unmanned Aerial Vehicles (UAVs) are applied in increasingly complex and adverse environments, automatic obstacle-avoidance techniques are becoming ever more important. Reinforcement learning (RL) is a well-known technique in the domain of Machine Learning (ML) that learns by interacting with the environment, without requiring massive prior training samples. It is therefore attractive to apply RL to support UAV tasks in unknown environments. This paper adopts an Adaptive and Random Exploration (ARE) approach to accomplish both UAV navigation and obstacle avoidance. Search mechanisms guide the UAV to escape onto a proper path. Simulations on different scenarios show that the approach effectively guides UAVs to their targets along quite rational paths.
ISSN:1934-1768
DOI:10.23919/ChiCC.2017.8027884
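The record does not include the authors' implementation. As a minimal illustrative sketch of the tabular Q-learning with ε-greedy exploration that the paper's title and summary refer to, the following trains a policy to reach a target on a small obstacle grid; the grid layout, reward values, and hyperparameters here are assumptions for demonstration, not the paper's ARE method:

```python
import random

def train_q_learning(grid, start, goal, episodes=500,
                     alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning on a grid world; cells marked 1 are obstacles."""
    rng = random.Random(seed)
    rows, cols = len(grid), len(grid[0])
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
    Q = {}  # (state, action_index) -> estimated value

    def step(state, a):
        r, c = state[0] + actions[a][0], state[1] + actions[a][1]
        if not (0 <= r < rows and 0 <= c < cols) or grid[r][c] == 1:
            return state, -10.0          # blocked or off-grid: penalty, stay put
        if (r, c) == goal:
            return (r, c), 100.0         # reached the target
        return (r, c), -1.0              # step cost encourages short paths

    for _ in range(episodes):
        s = start
        for _ in range(4 * rows * cols):          # cap episode length
            if rng.random() < epsilon:            # random exploration
                a = rng.randrange(4)
            else:                                 # greedy exploitation
                a = max(range(4), key=lambda x: Q.get((s, x), 0.0))
            s2, reward = step(s, a)
            best_next = max(Q.get((s2, x), 0.0) for x in range(4))
            # Standard Q-learning update rule
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (
                reward + gamma * best_next - Q.get((s, a), 0.0))
            s = s2
            if s == goal:
                break
    return Q, actions

def greedy_path(Q, actions, grid, start, goal):
    """Follow the learned greedy policy from start toward goal."""
    path, s = [start], start
    for _ in range(100):
        if s == goal:
            break
        a = max(range(4), key=lambda x: Q.get((s, x), 0.0))
        r, c = s[0] + actions[a][0], s[1] + actions[a][1]
        if not (0 <= r < len(grid) and 0 <= c < len(grid[0])) or grid[r][c] == 1:
            break  # policy not yet reliable at this state
        s = (r, c)
        path.append(s)
    return path
```

A short usage example on a hypothetical 4×4 grid with obstacles: `train_q_learning(grid, (0, 0), (3, 3))` followed by `greedy_path(...)` yields an obstacle-free route from start to goal once the Q-table has converged. The ARE approach described in the summary adapts the exploration strategy rather than using a fixed ε as this sketch does.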