SIAM Journal on Control and Optimization, Vol. 45, No. 6, pp. 2169-2206, 2007
Convergent numerical scheme for singular stochastic control with state constraints in a portfolio selection problem
We consider a singular stochastic control problem with state constraints that arises in problems of optimal consumption and investment under transaction costs. Numerical approximations for the value function using the Markov chain approximation method of Kushner and Dupuis are studied. The main result of the paper shows that the value function of the Markov decision problem (MDP) corresponding to the approximating controlled Markov chain converges to that of the original stochastic control problem as the various parameters in the approximation approach suitable limits. All our convergence arguments are probabilistic; the main assumption that we make is that the value function be finite and continuous. In particular, uniqueness of solutions of the associated HJB equations is neither needed nor available (in the generality under which the problem is considered). Specific features of the problem that make the convergence analysis nontrivial include unboundedness of the state space, the control space, and the cost function; degeneracies in the dynamics; mixed (Dirichlet-Neumann) boundary conditions; and the presence of both singular and absolutely continuous controls in the dynamics. Finally, schemes for computing the value function and optimal control policies for the MDP are presented and illustrated with a numerical study.
Keywords: singular control; Hamilton-Jacobi-Bellman equations; portfolio selection; stochastic control; free boundary problem; Skorohod problem
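To make the abstract's reference to the Markov chain approximation method and the associated MDP computations concrete, the following is a minimal, hypothetical sketch (not the paper's actual scheme or problem data): value iteration for a locally consistent Markov chain approximation, in the spirit of Kushner and Dupuis, of a one-dimensional controlled diffusion dX = b(X,u) dt + sigma dW with a discounted running reward, on a truncated grid with crude reflection at the endpoints. The drift b, reward r, discount rate beta, volatility sigma, and grid bounds are all illustrative assumptions, not taken from the paper.

```python
# Hypothetical illustration: Kushner-Dupuis-style value iteration on a truncated grid.
# All model data (b, r, beta, sigma, grid bounds, control set) are placeholders.
import numpy as np

def value_iteration(h=0.05, x_max=2.0, beta=0.1, sigma=0.3,
                    controls=np.linspace(0.0, 1.0, 11),
                    tol=1e-8, max_iter=10_000):
    grid = np.arange(0.0, x_max + h / 2, h)         # truncated state grid
    n = len(grid)
    V = np.zeros(n)

    def b(x, u):                                    # illustrative controlled drift
        return u - 0.5 * x

    def r(x, u):                                    # illustrative running reward
        return np.sqrt(x) - 0.1 * u

    for _ in range(max_iter):
        V_new = np.full(n, -np.inf)
        for u in controls:
            drift = b(grid, u)
            Q = sigma**2 + h * np.abs(drift)        # normalizer for local consistency
            dt = h**2 / Q                           # interpolation time step
            p_up = (0.5 * sigma**2 + h * np.maximum(drift, 0.0)) / Q
            p_dn = (0.5 * sigma**2 + h * np.maximum(-drift, 0.0)) / Q
            up = np.minimum(np.arange(n) + 1, n - 1)  # clip indices at the
            dn = np.maximum(np.arange(n) - 1, 0)      # truncation boundaries
            cont = p_up * V[up] + p_dn * V[dn]
            V_new = np.maximum(V_new, r(grid, u) * dt + np.exp(-beta * dt) * cont)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return grid, V

if __name__ == "__main__":
    grid, V = value_iteration()
    print(V[:5])                                    # approximate value at the first grid points
```

The transition probabilities p_up and p_dn sum to one by construction and are locally consistent with the drift and diffusion coefficients, which is the key property used in convergence arguments of this type; the singular-control and state-constraint features treated in the paper would require additional controlled transitions and boundary handling beyond this sketch.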