A Deep Reinforcement Learning Based Collision Avoidance Algorithm for USV in Narrow Channel


Bibliographic Details
Published in: International Symposium on Autonomous Systems (Online), pp. 1 - 6
Main Authors: Lin, Yuchang; Song, Rui; Qu, Dong
Format: Conference Proceeding
Language: English
Published: IEEE, 23.05.2025
ISSN: 2996-3850
DOI: 10.1109/ICAISISAS64483.2025.11051855

More Information
Summary: The utilization of deep reinforcement learning (DRL) algorithms presents a viable approach to addressing the collision avoidance problem for unmanned surface vehicles (USVs) in complex environments. However, DRL-based algorithms are subject to certain limitations, including difficulty in concurrently managing obstacle avoidance and channel-edge constraints due to insufficient exploration of environmental information, which reduces the usability of the generated paths. This study proposes a Soft Actor-Critic (SAC) based dynamic obstacle avoidance algorithm that optimizes the path-planning strategy of a USV model using the DRL algorithm, thereby achieving efficient two-dimensional position control. The proposed approach integrates the basic SAC algorithm with the artificial potential field method, facilitating the generation of faster and smoother obstacle avoidance paths and mitigating the limitations of traditional DRL-based algorithms. Simulation results demonstrate that the proposed algorithm effectively avoids obstacles in narrow channel environments, thereby enhancing the autonomous navigation capability and safety of the USV model.
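
The abstract does not give implementation details, but the sketch below illustrates one common way an artificial potential field (APF) can be combined with an SAC agent: using the potential as a reward-shaping term. This is a minimal, self-contained Python sketch, not the authors' implementation; the gains K_ATT and K_REP, the influence distance, the obstacle layout, and all function names are illustrative assumptions.

import numpy as np

# Minimal illustrative sketch: an APF-based reward-shaping term that an
# SAC agent could receive in addition to the task reward. Gains and
# distances below are assumed values for demonstration only.

K_ATT = 1.0        # attractive gain toward the goal (assumed)
K_REP = 0.5        # repulsive gain from obstacles / channel edges (assumed)
D_INFLUENCE = 5.0  # repulsion acts only within this range, in metres (assumed)

def attractive_potential(pos, goal):
    """Quadratic attractive potential pulling the USV toward the goal."""
    return 0.5 * K_ATT * np.sum((pos - goal) ** 2)

def repulsive_potential(pos, obstacles):
    """Sum of repulsive potentials from obstacles and channel edges."""
    total = 0.0
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 1e-6 < d < D_INFLUENCE:
            total += 0.5 * K_REP * (1.0 / d - 1.0 / D_INFLUENCE) ** 2
    return total

def apf_shaped_reward(prev_pos, pos, goal, obstacles):
    """Potential-based shaping: reward the decrease in total potential
    between consecutive positions, steering the agent toward the goal
    and away from obstacles and channel boundaries."""
    phi_prev = attractive_potential(prev_pos, goal) + repulsive_potential(prev_pos, obstacles)
    phi_now = attractive_potential(pos, goal) + repulsive_potential(pos, obstacles)
    return phi_prev - phi_now  # positive when the potential decreases

if __name__ == "__main__":
    # Toy example: one step in a narrow channel with two edge obstacles.
    goal = np.array([50.0, 0.0])
    obstacles = [np.array([10.0, 2.0]), np.array([10.0, -2.0])]
    r = apf_shaped_reward(np.array([0.0, 0.0]), np.array([1.0, 0.0]), goal, obstacles)
    print(f"shaped reward for this step: {r:.3f}")

In such a setup the shaped term is simply added to the environment reward used to train the SAC policy, so the potential field biases exploration toward smooth, collision-free paths without changing the optimal policy of the underlying task.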