A Deep Reinforcement Learning Approach for UAV Path Planning Incorporating Vehicle Dynamics with Acceleration Control

Bibliographic Details
Published in: Unmanned Systems (Singapore), pp. 1-22
Main Authors: Sabzekar, Sina; Samadzad, Mahdi; Mehditabrizi, Asal; Tak, Ala Nekouvaght
Format: Journal Article
Language: English
Published: 01.05.2024

Summary: Unmanned aerial vehicles (UAVs) are seeing rapid expansion of their applications across domains including goods delivery, video capture, and traffic control. Accurate path planning is crucial for UAVs to execute successful target tracking and obstacle avoidance maneuvers. This paper contributes a novel model with acceleration control that accounts for changing variables such as UAV velocity and altitude while also incorporating vehicle dynamics; to enhance realism, drag force is included as a factor. The study explores the potential of deep reinforcement learning (DRL), specifically the deep deterministic policy gradient (DDPG) algorithm, for modeling a 3D continuous environment with a continuous set of actions. To improve the UAV's performance in target tracking and obstacle avoidance, an innovative reward function based on the inner product is proposed. The training results show that the UAV successfully learns to perform these tasks, and simulation results demonstrate the superior performance of the proposed UAV modeling and reward function compared to existing works.
ISSN: 2301-3850, 2301-3869
DOI: 10.1142/S2301385024420044
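
The abstract names two ideas without giving their exact formulation: acceleration-level control of a UAV subject to drag, and an inner-product-based reward for target tracking and obstacle avoidance. The sketch below is an illustration only, not the paper's method: it assumes a point-mass model with quadratic drag and a reward that scores the alignment of the velocity vector with the direction to the target while penalizing velocity components aimed at a nearby obstacle. All names, constants, and weights (e.g. `step_dynamics`, `inner_product_reward`, `K_DRAG`, `safe_radius`) are hypothetical.

```python
import numpy as np

# Hypothetical point-mass UAV with quadratic drag and acceleration control,
# plus an inner-product-style reward. Constants are illustrative; the paper's
# exact formulation is not given in this record.

DT = 0.1        # integration step [s] (assumed)
MASS = 1.5      # UAV mass [kg] (assumed)
K_DRAG = 0.05   # lumped drag coefficient 0.5 * rho * Cd * A (assumed)

def step_dynamics(pos, vel, accel_cmd):
    """Advance the UAV one step: commanded acceleration minus quadratic drag."""
    drag_accel = -(K_DRAG / MASS) * np.linalg.norm(vel) * vel
    vel_next = vel + (accel_cmd + drag_accel) * DT
    pos_next = pos + vel_next * DT
    return pos_next, vel_next

def inner_product_reward(pos, vel, target_pos, obstacle_pos,
                         safe_radius=2.0, avoid_weight=1.0):
    """Reward velocity aligned with the target direction; penalize velocity
    components pointed at an obstacle inside the (assumed) safety radius."""
    to_target = target_pos - pos
    target_dir = to_target / (np.linalg.norm(to_target) + 1e-8)
    r_track = float(np.dot(vel, target_dir))

    to_obst = obstacle_pos - pos
    dist = np.linalg.norm(to_obst)
    r_avoid = 0.0
    if dist < safe_radius:
        obst_dir = to_obst / (dist + 1e-8)
        r_avoid = -avoid_weight * max(0.0, float(np.dot(vel, obst_dir)))
    return r_track + r_avoid
```

An inner-product term of this kind collapses speed and heading into a single scalar, which suits a DDPG agent that outputs continuous 3D acceleration commands; how the paper actually weights tracking against avoidance is not stated in this record.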