Automatica, Vol. 95, pp. 385-398, 2018
Multiple stopping time POMDPs: Structural results & application in interactive advertising on social media
This paper considers a multiple stopping time problem for a Markov chain observed in noise, where a decision maker chooses at most L stopping times to maximize a cumulative objective. We formulate the problem as a Partially Observed Markov Decision Process (POMDP) and derive structural results for the optimal multiple stopping policy. The main results are as follows: (i) The optimal multiple stopping policy is shown to be characterized by threshold curves Γ_l, for l = 1, ..., L, in the unit simplex of Bayesian posteriors. (ii) The stopping sets S_l (defined by the threshold curves Γ_l) are shown to exhibit the nested structure S_{l-1} ⊆ S_l. (iii) The optimal cumulative reward is shown to be monotone with respect to the copositive ordering of the transition matrix. (iv) A stochastic gradient algorithm is provided for estimating linear threshold policies by exploiting the structural results; these linear threshold policies approximate the threshold curves Γ_l and share the monotone structure of the optimal multiple stopping policy. (v) The multiple stopping framework is applied to interactively schedule advertisements in live online social media; advertisement scheduling using multiple stopping is shown to perform significantly better than currently used methods.
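As a rough illustration of result (iv), the sketch below tunes linear threshold policies on the belief simplex with a simultaneous-perturbation (SPSA-type) stochastic gradient. It is a minimal toy example and not the paper's implementation: the transition matrix, observation likelihoods, stop rewards, step sizes, and all function names are illustrative assumptions.

```python
# Hedged sketch: SPSA-style stochastic gradient tuning of a linear threshold
# policy for a multiple stopping POMDP. The model parameters below are
# placeholders chosen only so the example runs end to end.
import numpy as np

rng = np.random.default_rng(0)

X = 3          # number of Markov chain states (belief lives in the 2-simplex)
L = 2          # maximum number of stops allowed


def linear_threshold_policy(belief, stops_remaining, theta):
    """Stop iff a linear function of the belief crosses zero.

    theta holds one coefficient vector per remaining-stops level l = 1..L,
    mirroring the threshold curves Gamma_l on the belief simplex.
    """
    coeffs = theta[stops_remaining - 1]
    return 1 if coeffs @ np.append(belief[:-1], 1.0) >= 0 else 0


def simulate_cumulative_reward(theta, horizon=50):
    """Simulate one episode under the threshold policy (toy model)."""
    P = np.array([[0.80, 0.15, 0.05],      # transition matrix (assumed)
                  [0.10, 0.80, 0.10],
                  [0.05, 0.15, 0.80]])
    B = np.array([[0.70, 0.20, 0.10],      # B[x, y] = P(obs y | state x)
                  [0.20, 0.60, 0.20],
                  [0.10, 0.20, 0.70]])
    r = np.array([1.0, 0.3, 0.0])          # reward for stopping in each state

    x = rng.choice(X)                      # hidden state
    belief = np.full(X, 1.0 / X)           # Bayesian posterior
    stops_left, total = L, 0.0
    for _ in range(horizon):
        if stops_left == 0:
            break
        if linear_threshold_policy(belief, stops_left, theta):
            total += r[x]
            stops_left -= 1
        # Markov chain step, noisy observation, HMM filter update
        x = rng.choice(X, p=P[x])
        y = rng.choice(X, p=B[x])
        belief = (P.T @ belief) * B[:, y]
        belief /= belief.sum()
    return total


def spsa_step(theta, step=0.05, perturb=0.1):
    """One simultaneous-perturbation gradient ascent step on the policy."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    r_plus = simulate_cumulative_reward(theta + perturb * delta)
    r_minus = simulate_cumulative_reward(theta - perturb * delta)
    grad_est = (r_plus - r_minus) / (2 * perturb) * delta
    return theta + step * grad_est


theta = rng.standard_normal((L, X))        # one linear threshold per level l
for _ in range(200):
    theta = spsa_step(theta)
print("estimated threshold coefficients:\n", theta)
```

In the paper, the linear threshold policies are constructed so that they retain the monotone (nested) structure of the optimal policy; the sketch above omits any such constraint and is meant only to convey the stochastic-gradient tuning loop.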
Keywords: Partially observed Markov decision process; Multiple stopping time problem; Structural result; Monotone policies; Stochastic approximation; Monotone likelihood ratio dominance; Submodularity; Live social media; Scheduling; Interactive advertisement