End-to-End Deep Reinforcement Learning for Image-Based UAV Autonomous Control

Bibliographic Details
Published in: Applied Sciences, Vol. 11, No. 18, p. 8419
Main Authors: Zhao, Jiang; Sun, Jiaming; Cai, Zhihao; Wang, Longhong; Wang, Yingxun
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.09.2021

Summary: To achieve perception-based autonomous control of UAVs, state-of-the-art work commonly relies on schemes with onboard sensing and computing that consist of several separate modules, each with its own complex algorithm. Most such methods depend on handcrafted designs and prior models, with little capacity for adaptation and generalization. Inspired by research on deep reinforcement learning, this paper proposes a new end-to-end autonomous control method that collapses the separate modules of the traditional control pipeline into a single neural network. An image-based reinforcement learning framework is established around the design of the network architecture and the reward function. Training is performed with model-free algorithms tailored to the specific mission, and the resulting control policy network maps the input image directly to continuous actuator control commands. A simulation environment for the UAV landing scenario was built, and the results under typical cases, covering both small and large initial lateral or heading-angle offsets, show that the proposed end-to-end method is feasible for perception-based autonomous control.
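
The core idea in the summary, a single policy network that maps a raw camera image directly to a continuous actuator command, can be sketched in a few lines. The PyTorch snippet below is only an illustrative sketch under assumed choices (an 84x84 grayscale input, a DQN-style convolutional encoder, and a 4-dimensional bounded action); it is not the authors' published architecture, and in the paper such a policy would be trained with a model-free reinforcement learning algorithm against a mission-specific reward.

    import torch
    import torch.nn as nn

    class ImagePolicy(nn.Module):
        """Maps a camera image directly to a continuous actuator command (illustrative sketch)."""

        def __init__(self, action_dim: int = 4):
            super().__init__()
            # Convolutional encoder for an assumed 84x84 grayscale frame.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
                nn.Flatten(),
            )
            # Head producing bounded continuous actions in [-1, 1],
            # interpreted here as normalized actuator commands.
            self.head = nn.Sequential(
                nn.Linear(64 * 7 * 7, 256), nn.ReLU(),
                nn.Linear(256, action_dim), nn.Tanh(),
            )

        def forward(self, image: torch.Tensor) -> torch.Tensor:
            # image: (batch, 1, 84, 84), pixel values normalized to [0, 1]
            return self.head(self.encoder(image))

    # Minimal usage example with a dummy frame.
    policy = ImagePolicy(action_dim=4)
    frame = torch.rand(1, 1, 84, 84)   # placeholder camera image
    command = policy(frame)            # shape (1, 4), values in [-1, 1]

During training, a model-free algorithm (for example, an actor-critic method for continuous actions) would update this network from the reward signal; the action dimension and image size above are placeholders for whatever the actual platform and mission require.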
ISSN: 2076-3417
DOI: 10.3390/app11188419