Multi-Input Autonomous Driving based on Deep Reinforcement Learning with Double Bias Experience Replay
Published in: IEEE Sensors Journal, Vol. 23, No. 11, p. 1
Format: Journal Article
Language: English
Published: New York: IEEE, 01.06.2023
Publisher: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Summary: It is still a challenge to realize safe and fast autonomous driving through deep reinforcement learning. Most autonomous driving reinforcement learning models rely on a single experience replay approach for training agents, and improving the agent's driving speed and safety has become a focus of research. Therefore, we present an improved Double Bias Experience Replay (DBER) approach, which enables the agent to choose its own driving learning tendency. A new loss function is proposed to balance the relationship between negative loss and positive loss. The proposed approach has been applied to three algorithms for verification: Deep Q Network (DQN), Dueling Double DQN (DD-DQN), and Quantile Regression DQN (QR-DQN). Compared with existing approaches, the proposed approach shows better performance and robustness of the driving policy on a driving simulator implemented with Unity ML-Agents. The approach enables the vehicle agent to achieve better performance, such as higher reward, faster driving speed, and fewer lane changes, within the same training time.
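The abstract does not give the DBER internals, but the core idea of biasing experience replay toward a chosen driving tendency can be illustrated with a generic sketch: a buffer that pools positive- and negative-reward transitions separately and samples them with a tunable bias. All names and the sampling rule below are assumptions for illustration, not the paper's actual algorithm.

```python
import random
from collections import deque


class DoubleBiasReplayBuffer:
    """Hypothetical sketch of a biased replay buffer: transitions are split
    into positive- and negative-reward pools, and sampling draws from the
    positive pool with probability `positive_bias` (assumed parameter)."""

    def __init__(self, capacity=10000, positive_bias=0.6, seed=None):
        self.pos = deque(maxlen=capacity)   # transitions with reward >= 0
        self.neg = deque(maxlen=capacity)   # transitions with reward < 0
        self.positive_bias = positive_bias  # chance of sampling the pos pool
        self.rng = random.Random(seed)

    def push(self, state, action, reward, next_state, done):
        pool = self.pos if reward >= 0 else self.neg
        pool.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        batch = []
        for _ in range(batch_size):
            # Draw from the biased pool when possible; fall back to the
            # non-empty pool so sampling never fails on a one-sided buffer.
            if self.rng.random() < self.positive_bias and self.pos:
                pool = self.pos
            elif self.neg:
                pool = self.neg
            else:
                pool = self.pos
            batch.append(self.rng.choice(pool))
        return batch

    def __len__(self):
        return len(self.pos) + len(self.neg)
```

Setting `positive_bias` high would tilt training toward successful (e.g. fast, collision-free) transitions, while a low value emphasizes failures; this is one plausible way an agent could "choose its own driving learning tendency" as the abstract describes.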
ISSN: 1530-437X, 1558-1748
DOI: 10.1109/JSEN.2023.3237206