Lyapunov-Inspired Deep Reinforcement Learning for Robot Navigation in Obstacle Environments

Bibliographic Details
Published in: 2025 IEEE Symposium on Computational Intelligence on Engineering/Cyber Physical Systems (CIES), pp. 1-8
Main Authors: Ugurlu, Halil Ibrahim; Redder, Adrian; Kayacan, Erdal
Format: Conference Proceeding
Language: English
Published: IEEE, 17.03.2025
DOI: 10.1109/CIES64955.2025.11007627

Summary: The inherent black-box nature of deep reinforcement learning (DRL) poses challenges in ensuring safety constraints. This paper, therefore, introduces a DRL reward design inspired by Lyapunov stability theory for safe robot navigation in the presence of obstacles. The navigation problem is formulated as a state-space control problem with close obstacle locations integrated into the state representation. To ensure safe obstacle avoidance, we introduce a novel reward-shaping strategy utilizing a Lyapunov function that discourages fast movement toward obstacles. Our numerical experiments demonstrate the effectiveness of the reward design strategy compared to baselines, achieving consistently superior learning with higher mission completion rates while maintaining speeds closer to a desired target speed. In addition, we show that our reward design enables a generally smaller choice of discount factor for value-function-based DRL algorithms, which can lead to faster convergence. This is possible since the reward design merely penalizes the one-step decay of the Lyapunov function. Furthermore, policy training simulations employ an early episode termination method to constrain exploration and add more valuable samples to the DRL training replay memory. Finally, real-world experiments with a quadrotor validate the ability of our method to safely navigate around varying densities of obstacles. The proposed method consistently takes cautious maneuvers near obstacles by slowing down, achieving greater obstacle clearance than the baseline, although with an increase in mission completion time.
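
To make the reward idea described in the summary concrete, below is a minimal, hypothetical sketch of a reward shaped by the one-step decay of an obstacle-related Lyapunov-like function. The choice of V (clipped distance to the nearest obstacle), the weights, and the target-speed term are illustrative assumptions made here, not the formulation used in the paper.

```python
import numpy as np

def lyapunov_value(pos, obstacles, sense_radius=3.0):
    """Assumed Lyapunov-like safety value: clipped distance to the nearest
    obstacle. Large V means far from obstacles; V decays as the robot
    approaches them. The paper's actual V may differ."""
    dists = np.linalg.norm(obstacles - pos, axis=1)
    return min(float(np.min(dists)), sense_radius)

def shaped_reward(pos, next_pos, speed, obstacles,
                  target_speed=1.0, w_safety=1.0, w_speed=0.1):
    """Reward sketch: penalize the one-step decay of V (i.e., fast motion
    toward obstacles) plus deviation from a desired target speed.
    Weights and the speed term are hypothetical."""
    decay = lyapunov_value(pos, obstacles) - lyapunov_value(next_pos, obstacles)
    r_safety = -w_safety * max(decay, 0.0)   # penalize only when V decreases
    r_speed = -w_speed * abs(speed - target_speed)
    return r_safety + r_speed
```

Because such a penalty depends only on the one-step difference of V rather than on a long-horizon return, it is consistent with the summary's claim that a value-function-based learner can use a smaller discount factor while still capturing the safety signal.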