IEEE Transactions on Automatic Control, Vol. 65, No. 2, pp. 636-648, 2020
Data-Driven Economic NMPC Using Reinforcement Learning
Reinforcement learning (RL) is a powerful tool for data-driven optimal control that does not rely on a model of the system. However, RL struggles to provide hard guarantees on the behavior of the resulting control scheme. In contrast, nonlinear model predictive control (NMPC) and economic NMPC (ENMPC) are standard tools for the closed-loop optimal control of complex systems with constraints and limitations, and they benefit from a rich theory for assessing their closed-loop behavior. Unfortunately, the performance of (E)NMPC hinges on the quality of the model underlying the control scheme. In this paper, we show that an (E)NMPC scheme can be tuned to deliver the optimal policy of the real system even when it uses a wrong model. This result also holds for real systems with stochastic dynamics. It entails that ENMPC can be used as a new type of function approximator within RL. Furthermore, we investigate our results in the context of ENMPC and formally connect them to the concept of dissipativity, which is central to ENMPC stability. Finally, we detail how these results can be used to deploy classic RL tools for tuning (E)NMPC schemes. We apply these tools to both a classical linear MPC setting and a standard nonlinear example from the ENMPC literature.
Keywords: Optimal control; Stochastic processes; Stability analysis; Adaptive nonlinear model predictive control (NMPC); Economic NMPC (ENMPC); Reinforcement learning (RL); Strict dissipativity
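To make the idea summarized in the abstract concrete, the following is a minimal, illustrative sketch (not code from the paper) of using a parameterized MPC scheme as a Q-function approximator and tuning its parameters with plain Q-learning on data from the real system. The scalar system, the stage cost, and all names (a_true, a_model, theta, etc.) are assumptions introduced only for this example; the prediction model inside the MPC is deliberately wrong.

```python
# Illustrative sketch only: Q-learning on the parameters of a small MPC scheme
# acting as a function approximator. All system/cost choices are assumptions.
import numpy as np
from scipy.optimize import minimize

# "Real" scalar system (unknown to the controller) and the biased model used by the MPC.
a_true, b_true = 0.9, 0.5          # true dynamics  s+ = a_true*s + b_true*u
a_model, b_model = 0.7, 0.5        # wrong model used inside the MPC predictions
gamma, N = 0.95, 5                 # discount factor and MPC horizon

def stage_cost(s, u):
    """Baseline stage cost L(s, u) of the real task (assumed for this example)."""
    return s**2 + 0.1 * u**2

def mpc_value(theta, s0, u0=None):
    """Solve the parameterized MPC from state s0.

    theta = [w_s, w_u, w_T] weights the predicted stage and terminal costs.
    If u0 is given, the first input is fixed (this yields Q_theta(s0, u0));
    otherwise the whole sequence is free (this yields V_theta(s0) and the
    first input of the MPC policy).
    """
    n_free = N - 1 if u0 is not None else N

    def cost(u_free):
        u = np.concatenate(([u0], u_free)) if u0 is not None else u_free
        s, J = s0, 0.0
        for k in range(N):
            J += gamma**k * (theta[0] * s**2 + theta[1] * u[k]**2)
            s = a_model * s + b_model * u[k]        # prediction with the wrong model
        return J + gamma**N * theta[2] * s**2       # terminal cost

    res = minimize(cost, np.zeros(n_free), method="BFGS")
    u_first = u0 if u0 is not None else res.x[0]
    return res.fun, u_first

def q_value(theta, s, u):
    return mpc_value(theta, s, u0=u)[0]

def policy(theta, s):
    return mpc_value(theta, s)[1]

# Plain Q-learning on theta: semi-gradient TD(0) update, with the gradient of
# Q_theta with respect to theta approximated by finite differences.
theta = np.array([1.0, 0.1, 1.0])
alpha, eps = 1e-3, 1e-4
s = 1.0
rng = np.random.default_rng(0)
for step in range(200):
    u = policy(theta, s) + 0.05 * rng.standard_normal()   # exploration noise
    s_next = a_true * s + b_true * u                       # transition of the real system
    target = stage_cost(s, u) + gamma * mpc_value(theta, s_next)[0]
    td_error = target - q_value(theta, s, u)
    grad = np.array([(q_value(theta + eps * e, s, u) - q_value(theta, s, u)) / eps
                     for e in np.eye(3)])
    theta += alpha * td_error * grad                       # adjust the MPC parameters
    s = s_next
print("tuned MPC parameters:", theta)
```

The design choice illustrated here is the one the abstract points to: the learning update never touches the (wrong) prediction model, it only reshapes the MPC cost parameters so that the resulting policy improves on data generated by the real system.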