Chemical Engineering and Materials Research Information Center (CHERIC)
SIAM Journal on Control and Optimization, Vol.51, No.5, 3844-3862, 2013
POLICY ITERATION ALGORITHM FOR SINGULAR CONTROLLED DIFFUSION PROCESSES
In this paper, infinite-horizon optimal control problems for singular diffusion processes are considered from the viewpoints of Markov decision processes and perturbation analysis, where the singularity of the diffusion means that the covariance matrix of the system noise is allowed to be degenerate. A formula for the performance difference under two different controls is derived and leads to a comparison theorem. Using the comparison theorem, starting from a given control, a so-called better control can be selected. On this basis, a control policy iteration algorithm is developed, under which the performance improves step by step and converges to the optimal one. When the algorithm is applied to the stochastic affine nonlinear regulator and stochastic linear quadratic optimal control problems, a better control can be constructed in closed form. It is also shown that when the stochastic systems considered degenerate to deterministic ones, the proposed algorithm reduces to the adaptive dynamic programming algorithm [J. J. Murray, C. J. Cox, G. G. Lendaris, and R. Saeks, Adaptive dynamic programming, IEEE Trans. Systems Man Cybernet., 32 (2002), pp. 140-153] for affine nonlinear systems and to the well-known Kleinman algorithm [D. L. Kleinman, On an iterative technique for Riccati equation computation, IEEE Trans. Automat. Control, 13 (1968), pp. 114-115] for the linear quadratic optimal control problem.
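The Kleinman algorithm cited above is the classical policy iteration scheme for the deterministic linear quadratic regulator: given a stabilizing gain, one alternates policy evaluation (a Lyapunov equation) with policy improvement (a gain update), and the iterates converge to the solution of the algebraic Riccati equation. A minimal sketch in Python, assuming SciPy's Lyapunov solver and an illustrative double-integrator example (the matrices `A`, `B`, `Q`, `R`, and the initial gain `K0` below are not from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def kleinman(A, B, Q, R, K0, iters=20):
    """Kleinman policy iteration for the continuous-time LQR problem.

    Starting from a stabilizing gain K0, each step performs:
      1. Policy evaluation: solve the Lyapunov equation
         (A - B K)^T P + P (A - B K) = -(Q + K^T R K)
      2. Policy improvement: K <- R^{-1} B^T P
    The cost matrices P decrease monotonically and converge to the
    stabilizing solution of the algebraic Riccati equation.
    """
    K = K0
    for _ in range(iters):
        Ak = A - B @ K                      # closed-loop dynamics under current policy
        P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ P)     # improved feedback gain
    return K, P

# Illustrative example: double integrator with Q = I, R = 1.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
K0 = np.array([[1.0, 1.0]])                 # any stabilizing initial gain
K, P = kleinman(A, B, Q, R, K0)
```

For this example the algebraic Riccati equation can be solved by hand, giving P = [[sqrt(3), 1], [1, sqrt(3)]] and optimal gain K = [1, sqrt(3)], which the iteration reproduces. The key requirement, as in the paper's stochastic generalization, is that the initial policy be stabilizing so that each Lyapunov equation has a well-defined solution.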