Learning-based MPC from Big Data Using Reinforcement Learning

Bibliographic Details
Published in: arXiv.org
Main Authors: Sawant, Shambhuraj; Anand, Akhil S; Reinhardt, Dirk; Gros, Sebastien
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 04.01.2023

Summary: This paper presents an approach for learning Model Predictive Control (MPC) schemes directly from data using Reinforcement Learning (RL) methods. State-of-the-art learning methods use RL to improve the performance of parameterized MPC schemes. However, these learning algorithms are often gradient-based methods that require frequent evaluations of computationally expensive MPC schemes, which restricts their use on big datasets. We propose to tackle this issue by using tools from RL to learn a parameterized MPC scheme directly from data in an offline fashion. Our approach derives an MPC scheme without having to solve it over the collected dataset, thereby eliminating the computational burden existing techniques incur on big data. We evaluate the proposed method on three simulated experiments of varying complexity.
ISSN: 2331-8422
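
As a rough illustration of the idea in the summary (fitting the parameters of a value surrogate offline from a fixed dataset, rather than repeatedly evaluating an expensive controller during learning), here is a minimal NumPy sketch. It substitutes a simple quadratic Q-model for the parameterized MPC scheme, uses an invented toy linear system, random transition data, and arbitrary hyperparameters, and applies semi-gradient Q-learning; all of these are assumptions for illustration, not the paper's actual algorithm.

    # Illustrative only: a quadratic Q-model stands in for the parameterized MPC
    # scheme; the system, dataset, and hyperparameters below are assumptions,
    # not taken from the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    n_s, n_a, gamma, lr = 2, 1, 0.95, 1e-2

    # Hypothetical offline dataset: random transitions of a toy linear system.
    A = np.array([[0.9, 0.1], [0.0, 0.8]])
    B = np.array([[0.0], [1.0]])
    S = rng.normal(size=(500, n_s))
    U = rng.normal(size=(500, n_a))
    S_next = S @ A.T + U @ B.T
    cost = np.sum(S**2, axis=1) + 0.1 * np.sum(U**2, axis=1)

    def q_value(W, s, a):
        # Q_W(s, a) = [s; a]^T W [s; a]
        z = np.concatenate([s, a])
        return z @ W @ z

    def greedy_value(W, s):
        # Closed-form minimizer over a for the quadratic model (small ridge for safety).
        Wsa, Waa = W[:n_s, n_s:], W[n_s:, n_s:]
        a = -np.linalg.solve(Waa + 1e-6 * np.eye(n_a), Wsa.T @ s)
        return q_value(W, s, a)

    # Offline semi-gradient Q-learning: minimize the squared Bellman residual
    # over the fixed dataset, treating the bootstrapped target as a constant.
    W = np.eye(n_s + n_a)
    for _ in range(200):
        grad = np.zeros_like(W)
        for s, a, c, s2 in zip(S, U, cost, S_next):
            z = np.concatenate([s, a])
            td = q_value(W, s, a) - (c + gamma * greedy_value(W, s2))
            grad += td * np.outer(z, z)   # gradient of 0.5 * td^2 w.r.t. W
        W -= lr * grad / len(S)

    print("Learned quadratic Q weights:\n", np.round(W, 3))

The whole fit runs over the stored transitions only; no controller is re-solved inside the learning loop, which is the property the summary highlights for large datasets.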