Intelligent Pump Scheduling Optimization in Water Distribution Networks

Bibliographic Details
Published in: Learning and Intelligent Optimization, Vol. 11353, pp. 352-369
Main Authors: Candelieri, Antonio; Perego, Riccardo; Archetti, Francesco
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing AG, 01.01.2019
Series: Lecture Notes in Computer Science

Summary: In this paper, the authors address Pump Scheduling Optimization in Water Distribution Networks, targeting the minimization of energy costs subject to operational constraints such as satisfying demand, keeping pressures within certain bounds to reduce leakage and the risk of pipe bursts, and keeping reservoir levels within bounds to avoid overflow. Urban water networks generate huge amounts of data from flow/pressure sensors and smart metering of household consumption, yet traditional optimization strategies fail to capture the value hidden in these real-time data assets. The authors propose a sequential optimization method based on Approximate Dynamic Programming to find a control policy, defined as a mapping from states of the system to actions, i.e. pump settings. Q-Learning, an Approximate Dynamic Programming algorithm well known in the Reinforcement Learning community, is used. The key difference is that the usual Mathematical Programming approaches, including stochastic optimization, require knowing the water demand in advance or, at least, having a reliable and accurate forecast. Approximate Dynamic Programming, by contrast, provides a policy, that is, a strategy for deciding how to act from one time step to the next based on observations of the physical system. Results on the Anytown benchmark network show that the policy identified through Approximate Dynamic Programming is robust to modifications of the water demand and is therefore able to deal with real-time data without any distributional assumption.
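
The summary refers to Q-Learning producing a policy that maps system states to pump settings. The following is a minimal tabular Q-learning sketch in Python, not the authors' implementation: the state encoding (tank-level bin, hour of day), the action set, the tariff, the reward, and the toy transition function are all assumptions made for illustration, standing in for the hydraulic simulation of the network that would provide transitions in practice.

# Minimal tabular Q-learning sketch for pump scheduling (illustrative only).
# States are (tank-level bin, hour-of-day) pairs; actions are hypothetical
# discrete pump settings. The step() function is a toy stand-in for a
# hydraulic simulation of the network.
import random
from collections import defaultdict

N_LEVEL_BINS = 5          # discretized tank level (assumption)
N_HOURS = 24              # hourly control steps over one day
ACTIONS = [0, 1, 2]       # hypothetical pump settings: off / one pump / two pumps

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = defaultdict(float)    # Q[(state, action)] -> value estimate


def energy_price(hour):
    return 2.0 if 8 <= hour < 20 else 1.0     # assumed peak/off-peak tariff


def step(state, action):
    """Toy transition: returns (next_state, reward)."""
    level, hour = state
    # More pumping raises the tank level; demand drains it (crude stand-in).
    level = max(0, min(N_LEVEL_BINS - 1, level + action - 1))
    reward = -action * energy_price(hour)      # negative energy cost
    if level == 0:                             # penalize violating level bounds
        reward -= 10.0
    return (level, (hour + 1) % N_HOURS), reward


def choose_action(state):
    if random.random() < EPSILON:              # epsilon-greedy exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])


for episode in range(2000):
    state = (N_LEVEL_BINS // 2, 0)
    for _ in range(N_HOURS):
        action = choose_action(state)
        next_state, reward = step(state, action)
        # Standard Q-learning update: bootstrap on the best next action.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# The learned policy maps each state to the pump setting with the highest Q-value,
# and can be applied step by step without a demand forecast.
policy = {(l, h): max(ACTIONS, key=lambda a: Q[((l, h), a)])
          for l in range(N_LEVEL_BINS) for h in range(N_HOURS)}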
ISBN: 3030053474; 9783030053475
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-030-05348-2_30