Proximal Policy Optimization for Volt-VAR Control in Distribution Networks with Renewable Energy Resources

Bibliographic Details
Published in: 2021 IEEE Sustainable Power and Energy Conference (iSPEC), pp. 677-681
Main Authors: Zhu, Tao; Hai, Di; Zhou, Shengchao; Zhang, Ruiying; Yan, Ziheng; Wu, Minghe
Format: Conference Proceeding
Language: English
Published: IEEE, 23.12.2021
Summary: This paper addresses the problem of Volt/VAR control (VVC), which has become a critical issue with the increasing integration of renewable energy resources in power distribution networks. Traditional methods are limited by the incomplete measurements available in such scenarios. Recently, deep reinforcement learning (DRL) methods have emerged and been broadly adopted because they are model-free and computationally efficient. The main objective of VVC is to avoid voltage violations and minimize operating costs. In this paper, the VVC problem is formulated as a Markov decision process (MDP) with a penalty term that accounts for the operational constraints of the equipment. To stabilize the training process, a policy gradient algorithm, proximal policy optimization (PPO), is implemented. Numerical experiments on modified IEEE 12-bus and 33-bus systems show the benefits of the proposed control method.
DOI: 10.1109/iSPEC53008.2021.9735483
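
The summary names PPO as the training algorithm but does not reproduce its objective. For reference, the standard PPO clipped surrogate objective that such methods build on is

    r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}, \qquad
    L^{\mathrm{CLIP}}(\theta) = \hat{\mathbb{E}}_t\left[\min\left(r_t(\theta)\,\hat{A}_t,\ \mathrm{clip}\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right],

where \hat{A}_t is an advantage estimate and \epsilon is the clipping parameter; bounding each policy update via the clip is what stabilizes training relative to plain policy gradient methods.

As a rough illustration of the penalized MDP reward the summary describes (operating cost plus a penalty on voltage-limit violations), a minimal sketch follows. The voltage limits, penalty weight, and function signature are assumptions for illustration, not the paper's actual formulation:

    import numpy as np

    V_MIN, V_MAX = 0.95, 1.05    # assumed per-unit voltage limits
    PENALTY_WEIGHT = 100.0       # assumed weight on constraint violations

    def reward(bus_voltages: np.ndarray, operating_cost: float) -> float:
        # Hypothetical instance of the penalized reward: negative operating
        # cost minus a penalty proportional to the total voltage violation
        # summed over all buses.
        violation = np.sum(np.maximum(bus_voltages - V_MAX, 0.0)
                           + np.maximum(V_MIN - bus_voltages, 0.0))
        return -operating_cost - PENALTY_WEIGHT * violation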