IEEE Transactions on Automatic Control, Vol.59, No.10, 2796-2800, 2014
An Argument for the Bayesian Control of Partially Observable Markov Decision Processes
This technical note concerns the control of partially observable Markov decision processes characterized by a prior distribution over the underlying hidden Markov model parameters. In such instances, the control problem is commonly simplified by first choosing a point estimate from the model prior, and then selecting the control policy that is optimal with respect to that point estimate. Our contribution is to demonstrate, through a tractable yet nontrivial example, that even the best control policies constructed in this manner can significantly underperform the Bayes optimal policy. While this possibility of underperformance is an operative assumption in the Bayes-adaptive Markov decision process literature, to our knowledge no such illustrative example has been formally proposed.
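The gap between the two approaches can be made concrete with a toy one-step sketch (my own illustration, not the paper's example): under a uniform prior over two reward models, a "hedging" action can be Bayes optimal even though it is never optimal under any single point estimate, so every certainty-equivalent policy underperforms.

```python
# Toy illustration (not the paper's example) of certainty-equivalent
# control underperforming Bayes-optimal control under model uncertainty.
# Two equally likely reward models; action "c" hedges between them.

models = {
    "M1": {"a": 1.0, "b": -1.0, "c": 0.5},
    "M2": {"a": -1.0, "b": 1.0, "c": 0.5},
}
prior = {"M1": 0.5, "M2": 0.5}
actions = ["a", "b", "c"]

def expected_reward(action):
    """Expected one-step reward of an action under the prior over models."""
    return sum(prior[m] * models[m][action] for m in prior)

# Bayes-optimal control: maximize prior-expected reward directly.
bayes_action = max(actions, key=expected_reward)
bayes_value = expected_reward(bayes_action)

# Certainty-equivalent control: commit to a point estimate (one model),
# act optimally for it, then evaluate that policy under the true prior.
ce_values = []
for m in models:
    ce_action = max(actions, key=lambda a: models[m][a])
    ce_values.append(expected_reward(ce_action))
best_ce_value = max(ce_values)

print(bayes_action, bayes_value, best_ce_value)  # c 0.5 0.0
```

Here the Bayes-optimal policy earns 0.5 in expectation, while the best policy built from any point estimate earns 0, since acting optimally for either model means gambling on it being the true one.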