Safe reinforcement learning for real-time automatic control in a smart energy-hub
Published in: Applied Energy, Vol. 309, p. 118403
Main Authors: , , , ,
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.03.2022
Summary: Nowadays, multi-energy systems are receiving special attention from the smart grid community owing to their high flexibility potential when integrating multiple energy carriers. In this regard, the energy hub is known as a flexible and efficient platform for supplying energy demands with an acceptable range of affordability and reliability, relying on various energy production, storage, and conversion facilities. Given the increasing penetration of renewable energy sources to promote a low-carbon energy transition, accurate economic and environmental assessment of an energy hub, along with a real-time automatic energy management scheme, has become challenging due to the high variability of renewable energy sources. Furthermore, the conventional model-based optimization approach, which requires full knowledge of the employed mathematical operating models and accurate uncertainty distributions, may be impractical for real-world applications. In this context, this paper proposes a model-free safe deep reinforcement learning method for the optimal control of a renewable-based energy hub operating with multiple energy carriers while satisfying the physical constraints of the energy hub operation model. The main objective of this work is to minimize the system energy cost and carbon emissions while considering various energy components. The proposed deep reinforcement learning method is trained and tested on a real-world dataset to validate its superior performance in reducing energy cost, carbon emissions, and computational time with respect to state-of-the-art deep reinforcement learning and optimization-based approaches. Moreover, the effectiveness of the proposed method in dealing with model operation constraints is evaluated in both training and test environments. Finally, the generalization performance of the learnt energy management scheme, as well as sensitivity analyses on storage flexibility and carbon price, are examined in the case studies.
Highlights:
• A smart energy-hub integrating multiple energy carriers is investigated.
• A novel safe deep reinforcement learning method is proposed.
• The proposed method deals effectively with physical constraints.
• The proposed method achieves energy cost and carbon emission benefits.
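This record does not include implementation details, but the abstract's central idea (a safety mechanism that keeps a model-free deep RL agent's actions within the energy hub's physical limits, while the reward trades off energy cost against carbon emissions) can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the paper's method: the limits, efficiency, projection scheme, and function names are hypothetical.

```python
# Hypothetical sketch (not the paper's code): a safety layer that projects a
# DRL agent's raw storage action onto the feasible set implied by simple
# state-of-charge and power limits, plus a reward combining energy cost and
# a carbon price. All parameter values are assumed for illustration.
import numpy as np

P_MAX = 50.0   # max charge/discharge power [kW] (assumed)
E_CAP = 200.0  # storage energy capacity [kWh] (assumed)
ETA = 0.95     # one-way charging/discharging efficiency (assumed)
DT = 1.0       # control interval [h]

def project_action(p_raw: float, soc: float) -> float:
    """Clip a raw charge (+) / discharge (-) power so that |p| <= P_MAX
    and the next state of charge stays within [0, E_CAP]."""
    p = float(np.clip(p_raw, -P_MAX, P_MAX))
    p = min(p, (E_CAP - soc) / (ETA * DT))  # cannot overfill storage
    p = max(p, -soc * ETA / DT)             # cannot over-discharge storage
    return p

def reward(grid_import: float, price: float,
           emission_factor: float, carbon_price: float) -> float:
    """Negative of (energy cost + carbon cost) over one interval."""
    cost = grid_import * price * DT
    carbon = grid_import * emission_factor * carbon_price * DT
    return -(cost + carbon)
```

In this sketch the projection guarantees constraint satisfaction at every step regardless of what the policy network outputs, which is the general role a safety layer plays in safe reinforcement learning.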
ISSN: 0306-2619; EISSN: 1872-9118
DOI: 10.1016/j.apenergy.2021.118403