AIChE Journal, Vol. 55, No. 4, 919-930, 2009
Approximate Dynamic Programming Based Optimal Control Applied to an Integrated Plant with a Reactor and a Distillation Column with Recycle
An approximate dynamic programming (ADP) method has shown good performance in solving optimal control problems in many small-scale process control applications. The offline computational procedure of ADP constructs an approximation of the optimal "cost-to-go" function, which parameterizes the optimal control policy with respect to the state variable. With the approximate "cost-to-go" function computed, the multi-stage optimization problem that must be solved online at every sample time reduces to a single-stage optimization, thereby significantly lessening the real-time computational load. Moreover, stochastic uncertainties can be addressed relatively easily within this framework. Nonetheless, the existing ADP method requires excessive offline computation when applied to a high-dimensional system. A case study of a reactor and a distillation column with recycle was used to illustrate this issue. Several ways were then proposed to reduce the computational load so that the ADP method can be applied to high-dimensional integrated plants. The results showed that the approach is markedly superior to NMPC in both deterministic and stochastic cases. (C) 2009 American Institute of Chemical Engineers AIChE J, 55: 919-930, 2009
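The offline/online split described in the abstract can be illustrated with a minimal sketch: offline, a value-iteration loop fits a parametric cost-to-go approximation on sampled states; online, the controller performs only a single-stage minimization over the current control. The 1-D linear system, quadratic stage cost, and quadratic basis below are hypothetical stand-ins for illustration, not the paper's reactor-distillation model.

```python
import numpy as np

# Hypothetical 1-D system x+ = a*x + b*u with quadratic stage cost
# (a stand-in for the plant model; not from the paper).
a, b = 0.9, 0.5
gamma = 0.95

def stage_cost(x, u):
    return x ** 2 + 0.1 * u ** 2

xs = np.linspace(-2, 2, 41)   # sampled states for the offline sweep
us = np.linspace(-2, 2, 81)   # candidate controls on a grid

# Approximate cost-to-go as J(x) ~ theta * x^2 (one quadratic basis function).
theta = 0.0
for _ in range(200):          # offline value iteration on the samples
    targets = []
    for x in xs:
        # Bellman backup: stage cost plus discounted approximate cost-to-go
        q = stage_cost(x, us) + gamma * theta * (a * x + b * us) ** 2
        targets.append(q.min())
    # least-squares fit of theta to the backed-up values
    phi = xs ** 2
    theta = float(phi @ np.array(targets) / (phi @ phi))

def policy(x):
    """Online step: a single-stage optimization using the fitted cost-to-go."""
    q = stage_cost(x, us) + gamma * theta * (a * x + b * us) ** 2
    return us[int(np.argmin(q))]

# Closed-loop rollout: the fitted policy drives the state toward the origin.
x = 1.5
for _ in range(30):
    x = a * x + b * policy(x)
```

The expensive part (the value-iteration sweep over sampled states) happens entirely offline; each online call to `policy` is a one-dimensional search, which is the computational saving the abstract refers to.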
Keywords: approximate dynamic programming; nonlinear optimal control; integrated plant; dynamic optimization; nonlinear model predictive control