IEEE Transactions on Automatic Control, Vol.50, No.11, 1804-1808, 2005
Evolutionary policy iteration for solving Markov decision processes
We propose a novel algorithm called evolutionary policy iteration (EPI) for solving infinite-horizon discounted reward Markov decision processes. EPI inherits the spirit of policy iteration but eliminates the need to maximize over the entire action space in the policy improvement step, so it should be most effective for problems with very large action spaces. EPI iteratively generates a "population," or set of policies, such that the performance of the "elite policy" of each population monotonically improves with respect to a defined fitness function. EPI converges with probability one to a population whose elite policy is an optimal policy. EPI is naturally parallelizable, and in the course of this discussion a distributed variant of policy iteration is also studied.
Keywords: (distributed) policy iteration; evolutionary algorithm; genetic algorithm; Markov decision process; parallelization
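The abstract's description of EPI, maintaining a population of policies whose elite member monotonically improves, can be sketched as below. This is a minimal illustration, not the paper's exact algorithm: the toy MDP, population size, and mutation rate are invented for the example, and the elite is formed here by "policy switching" (at each state, act as the population member with the highest value there), a standard device that guarantees the elite dominates every member of its population.

```python
import numpy as np

# Hypothetical toy MDP (assumption for illustration): 3 states, 4 actions.
rng = np.random.default_rng(0)
nS, nA, gamma = 3, 4, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # P[s, a] = next-state distribution
R = rng.uniform(0.0, 1.0, size=(nS, nA))       # R[s, a] = expected one-step reward

def evaluate(policy):
    """Exact policy evaluation: solve (I - gamma * P_pi) v = r_pi."""
    P_pi = P[np.arange(nS), policy]            # nS x nS transition matrix under policy
    r_pi = R[np.arange(nS), policy]
    return np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)

def policy_switching(policies):
    """Combine a population into one policy at least as good as every member:
    at each state, copy the action of the member with the highest value there."""
    values = np.array([evaluate(p) for p in policies])   # pop_size x nS
    best = values.argmax(axis=0)
    return np.array([policies[best[s]][s] for s in range(nS)])

def epi(pop_size=6, mutate_prob=0.2, n_generations=30):
    """Evolutionary policy iteration sketch: elite + mutated offspring.
    Returns the final elite policy and the per-generation elite fitness
    (sum of state values), which is nondecreasing by construction."""
    population = [rng.integers(nA, size=nS) for _ in range(pop_size)]
    elite = policy_switching(population)
    history = [evaluate(elite).sum()]
    for _ in range(n_generations):
        # Next generation: keep the elite, add random mutations of it.
        population = [elite]
        for _ in range(pop_size - 1):
            child = elite.copy()
            mask = rng.random(nS) < mutate_prob
            child[mask] = rng.integers(nA, size=int(mask.sum()))
            population.append(child)
        elite = policy_switching(population)   # elite fitness never decreases
        history.append(evaluate(elite).sum())
    return elite, history
```

Because the elite is carried into every new population and policy switching never does worse than any member, the elite's fitness is monotone, mirroring the abstract's monotonic-improvement claim; the per-member policy evaluations are independent, which is what makes the scheme naturally parallelizable.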