Multi-agent Deep Reinforcement Learning Based Approach for Power Command Response Combining Fast and Normal Charging Piles
| Published in | 2024 4th Power System and Green Energy Conference (PSGEC), pp. 458-462 |
|---|---|
| Main Authors | , , |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 22.08.2024 |
| Subjects | |
Summary: The rapid growth in the number of electric vehicles (EVs) is conducive to promoting green development. Combining fast and normal charging can fully leverage the collaborative ability of different types of charging devices. Therefore, a decentralized collaborative charging strategy is proposed that combines multiple fast and normal charging piles, considers power command response, and is based on multi-agent deep reinforcement learning (MADRL). First, a framework is proposed that includes power command generation by the distribution system and power command response by multiple charging piles. Then, Minkowski summation is used to build an energy boundary model for the EV cluster, which helps the MADRL algorithm constrain the charging power. Moreover, two Markov Decision Processes (MDPs) are built: one for distribution network system optimization and another for the fast and normal charging piles. Finally, case studies are conducted on the training stability, the control effect of the collaborative objectives, and the scalability of the proposed method in scenarios with multiple fast and normal charging piles of different scales, demonstrating the effectiveness of the proposed method.
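The Minkowski-summation step in the abstract aggregates per-EV charging flexibility into a cluster-level region. The sketch below illustrates the general idea under simple assumptions not stated in the record: each EV is described by an arrival step, a departure step, a required energy, and a maximum charging power; its cumulative-energy envelope is bounded above by charging at full power as early as possible and below by the latest schedule that still meets the demand, and the cluster envelope is the element-wise (Minkowski) sum of the per-EV boundaries. All function and variable names here are illustrative, not from the paper.

```python
import numpy as np

def ev_energy_boundaries(arrival, departure, e_need, p_max, T):
    """Cumulative energy boundaries of one EV over T equal time steps.

    Upper boundary: charge at p_max as early as possible after arrival.
    Lower boundary: defer charging as long as possible while still
    delivering e_need by departure.
    """
    upper = np.zeros(T)
    lower = np.zeros(T)
    for t in range(T):
        # Energy delivered by step t when charging at full power from arrival.
        connected_steps = max(0, min(t + 1, departure) - arrival)
        upper[t] = min(e_need, p_max * connected_steps)
        # Minimum energy that must already be delivered by step t so the
        # remaining steps before departure can still cover the shortfall.
        remaining_steps = max(0, departure - (t + 1))
        lower[t] = max(0.0, e_need - p_max * remaining_steps)
    return lower, upper

def minkowski_sum_boundaries(evs, T):
    """Cluster flexibility region as the element-wise (Minkowski) sum
    of the individual EV energy boundaries."""
    lo = np.zeros(T)
    hi = np.zeros(T)
    for arrival, departure, e_need, p_max in evs:
        l, u = ev_energy_boundaries(arrival, departure, e_need, p_max, T)
        lo += l
        hi += u
    return lo, hi

# Two illustrative EVs: (arrival, departure, e_need, p_max)
evs = [(0, 4, 8.0, 4.0), (2, 5, 3.0, 3.0)]
lo, hi = minkowski_sum_boundaries(evs, T=6)
```

Any cluster charging trajectory whose cumulative energy stays between `lo` and `hi` can be disaggregated among the EVs, which is what lets a MADRL controller clip its aggregate power commands to a feasible range.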
DOI: 10.1109/PSGEC62376.2024.10721196