Deep Reinforcement-Learning-Based Adaptive Traffic Signal Control with Real-Time Queue Lengths
Published in: 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 1760 - 1765
Main Authors:
Format: Conference Proceeding
Language: English
Published: IEEE, 09.10.2022
Subjects:
Online Access: Get full text
Summary: Reinforcement learning (RL) with deep neural networks, as a data-driven approach, is promising for adaptive traffic signal control (ATSC). Most existing studies focus on designing efficient agents and policy optimization for ATSC, but neglect to observe more detailed states of the environment. In this paper, an adaptive traffic signal control strategy, named A2C RTQL, is proposed for scheduling the traffic signal at an intersection by combining real-time lane-based queue lengths with a deep RL agent. First, Lighthill-Whitham-Richards (LWR) shockwave theory is employed to obtain the real-time queue length in each lane. Then, with the obtained queue lengths defined as the inputs, the A2C RTQL strategy is designed for traffic signal control based on an advantage actor-critic (A2C) agent, where the lanes are divided into multiple parallel environments according to the traffic signal phases. Simulation results in SUMO under simulated peak-hour traffic dynamics demonstrate the optimality and efficiency of the proposed strategy compared with other methods.
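The abstract describes the control loop only at a high level, so the following is a minimal, hypothetical sketch of how per-lane queue lengths could serve as the state of an A2C-style signal controller in SUMO. It is not the authors' implementation: the TraCI halting-vehicle count stands in for the LWR shockwave estimator used in the paper, the training update is omitted, and all identifiers (QueueStateA2C, read_queue_lengths, intersection.sumocfg) are illustrative assumptions.

```python
# Hypothetical sketch: per-lane queue lengths as state for an A2C-style controller.
# Assumes SUMO/TraCI and PyTorch are installed; identifiers and config are illustrative.
import traci
import torch
import torch.nn as nn


class QueueStateA2C(nn.Module):
    """Tiny actor-critic head: per-lane queue lengths in, phase logits + value out."""

    def __init__(self, n_lanes: int, n_phases: int, hidden: int = 64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_lanes, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, n_phases)   # logits over signal phases
        self.critic = nn.Linear(hidden, 1)         # state-value estimate

    def forward(self, queues: torch.Tensor):
        h = self.shared(queues)
        return self.actor(h), self.critic(h)


def read_queue_lengths(lane_ids):
    """Placeholder for the LWR shockwave estimator: halting-vehicle counts via TraCI."""
    return torch.tensor(
        [traci.lane.getLastStepHaltingNumber(l) for l in lane_ids],
        dtype=torch.float32,
    )


if __name__ == "__main__":
    traci.start(["sumo", "-c", "intersection.sumocfg"])    # hypothetical scenario file
    tls_id = traci.trafficlight.getIDList()[0]
    lanes = sorted(set(traci.trafficlight.getControlledLanes(tls_id)))
    n_phases = len(traci.trafficlight.getAllProgramLogics(tls_id)[0].phases)

    agent = QueueStateA2C(n_lanes=len(lanes), n_phases=n_phases)
    for _ in range(300):                                    # short demo rollout
        logits, value = agent(read_queue_lengths(lanes))
        phase = torch.distributions.Categorical(logits=logits).sample().item()
        traci.trafficlight.setPhase(tls_id, phase)
        for _ in range(5):                                  # hold the chosen phase briefly
            traci.simulationStep()
    traci.close()
```

A full A2C update (advantage estimation from a reward such as the negative total queue length, plus actor and critic losses) would wrap this rollout loop; it is left out here for brevity.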
ISSN: 2577-1655
DOI: 10.1109/SMC53654.2022.9945292