SIAM Journal on Control and Optimization, Vol.54, No.3, 1444-1474, 2016
CONSTRAINED AND UNCONSTRAINED OPTIMAL DISCOUNTED CONTROL OF PIECEWISE DETERMINISTIC MARKOV PROCESSES
The main goal of this paper is to study the infinite-horizon expected discounted continuous-time optimal control problem for piecewise deterministic Markov processes, with the control acting continuously on the jump intensity lambda and on the transition measure Q of the process, but not on the deterministic flow phi. The paper contributes to both the unconstrained and the constrained cases. The set of admissible control strategies is assumed to consist of policies, possibly randomized and depending on the history of the process, taking values in a set-valued action space. For the unconstrained case we provide sufficient conditions, based on the three local characteristics of the process (phi, lambda, Q) and on the semicontinuity properties of the set-valued action space, that guarantee the existence and uniqueness of a solution to the integro-differential optimality equation (the so-called Bellman-Hamilton-Jacobi equation), as well as the existence of an optimal (and delta-optimal) deterministic stationary control strategy for the problem. For the constrained case we show that the value of the constrained control problem coincides with that of an associated infinite-dimensional linear programming (LP) problem, and moreover we provide sufficient conditions for the solvability of the LP problem as well as for the existence of an optimal feasible randomized stationary control strategy for the constrained problem.
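
To make the objects named in the abstract concrete, the display below sketches the standard discounted control formulation for piecewise deterministic Markov processes with local characteristics (phi, lambda, Q). The discount rate alpha, the running cost f, and the action sets A(x) are notation introduced here for illustration only; the precise assumptions and the exact form of the optimality equation studied in the paper may differ.

\[
V(x) \;=\; \inf_{u}\; \mathbb{E}_x^{u}\!\left[\int_0^{\infty} e^{-\alpha t}\, f(X_t, a_t)\, dt\right],
\qquad \alpha > 0,
\]
and, heuristically, a sufficiently regular value function satisfies the integro-differential optimality (Bellman-Hamilton-Jacobi) equation
\[
\alpha V(x) \;=\; \inf_{a \in A(x)} \Bigl\{ f(x,a) \;+\; \mathcal{X} V(x) \;+\; \lambda(x,a) \int_E \bigl[V(y) - V(x)\bigr]\, Q(dy \mid x, a) \Bigr\},
\]
where \(\mathcal{X} V(x) = \frac{d}{dt} V(\phi(x,t))\big|_{t=0}\) denotes the derivative of V along the deterministic flow phi. In this schematic, the control enters only through lambda and Q, consistent with the setting described above.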