IEEE Transactions on Automatic Control, Vol. 63, No. 11, pp. 3787-3792, 2018
Proper Policies in Infinite-State Stochastic Shortest Path Problems
We consider stochastic shortest path problems with infinite state and control spaces, a nonnegative cost per stage, and a termination state. We extend the notion of a proper policy, a policy that terminates within a finite expected number of steps, from the context of finite state spaces to the context of infinite state spaces. We consider the optimal cost function J* and the optimal cost function Ĵ over just the proper policies. We show that J* and Ĵ are the smallest and largest solutions of Bellman's equation, respectively, within a suitable class of Lyapunov-like functions. If the cost per stage is bounded, these functions are those that are bounded over the effective domain of Ĵ. The standard value iteration algorithm may be attracted to either J* or Ĵ, depending on the initial condition.
Keywords: Dynamic programming; Markov decision processes; stochastic optimal control; stochastic shortest paths (SSPs)
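The multiplicity of solutions of Bellman's equation and the initial-condition dependence of value iteration can be illustrated with a minimal one-state toy problem (a hypothetical sketch for illustration, not an example taken from the paper): a single state with a proper control that terminates at cost 1 and an improper control that self-loops at cost 0. Bellman's equation is then J = min{1, J}, which every J in [0, 1] solves, with J* = 0 at one extreme and Ĵ = 1 at the other.

```python
# Hypothetical one-state SSP (illustration only, not the paper's example):
#   control a: terminate immediately, incurring cost 1 (proper)
#   control b: remain in the state, incurring cost 0 (improper)
# Bellman equation: J = min(1 + 0, 0 + J) = min(1, J).
# Every J in [0, 1] is a fixed point; J* = 0 and J-hat = 1.

def value_iteration(J0, iters=100):
    """Run value iteration J_{k+1} = min(1, J_k) from initial guess J0."""
    J = J0
    for _ in range(iters):
        J = min(1.0, J)
    return J

print(value_iteration(0.0))  # start at 0: remains at J* = 0.0
print(value_iteration(5.0))  # start above 1: pulled down to J-hat = 1.0
```

Starting at 0, value iteration stays at J* = 0; starting above Ĵ = 1, it is attracted to Ĵ, matching the dichotomy described in the abstract.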