International Journal of Control, Vol. 92, No. 7, pp. 1692-1706, 2019
Stochastic differential games and inverse optimal controller and stopper policies
In this paper, we consider a two-player stochastic differential game over an infinite time horizon in which the players invoke controller and stopper strategies on a nonlinear stochastic dynamical system driven by Brownian motion. The optimal strategies for the two players are given explicitly by exploiting connections between stochastic Lyapunov stability theory and stochastic Hamilton-Jacobi-Isaacs theory. In particular, we show that asymptotic stability in probability of the differential game problem is guaranteed by means of a Lyapunov function that is the solution to the steady-state form of the stochastic Hamilton-Jacobi-Isaacs equation, thereby guaranteeing both stochastic stability and optimality of the closed-loop control and stopper policies. In addition, we develop optimal feedback controller and stopper policies for affine nonlinear systems using an inverse optimality framework tailored to the stochastic differential game problem. These results are then used to extend the linear feedback controller and stopper policies obtained in the literature to nonlinear feedback controllers and stoppers that minimise and maximise general polynomial and multilinear performance criteria.
Keywords: Stochastic differential games; inverse optimal control; Lyapunov functions; stochastic Hamilton-Jacobi-Isaacs equation; polynomial cost functions; multilinear forms