SIAM Journal on Control and Optimization, Vol. 42, No. 4, pp. 1143-1166, 2003
On Actor-Critic Algorithms
In this article, we propose and analyze a class of actor-critic algorithms. These are two-time-scale algorithms in which the critic uses temporal difference learning with a linearly parameterized approximation architecture, and the actor is updated in an approximate gradient direction, based on information provided by the critic. We show that the features for the critic should ideally span a subspace prescribed by the choice of parameterization of the actor. We study actor-critic algorithms for Markov decision processes with Polish state and action spaces. We state and prove two results regarding their convergence.
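The two-time-scale structure described above can be illustrated with a minimal sketch: a linear critic updated by temporal differences on a fast step size, and a softmax-parameterized actor nudged in an approximate gradient direction on a slower step size. The toy 2-state, 2-action MDP, the one-hot features, and all step-size values below are illustrative assumptions, not taken from the paper; the paper's analysis concerns general (Polish) state and action spaces and a compatible choice of critic features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy MDP (2 states, 2 actions), for illustration only.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.05, 0.95]]])   # P[s, a, s'] transition probabilities
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])                   # R[s, a] expected one-step rewards
nS, nA = 2, 2
gamma = 0.95

def phi(s, a):
    """One-hot state-action features for the linearly parameterized critic."""
    f = np.zeros(nS * nA)
    f[s * nA + a] = 1.0
    return f

def pi(theta, s):
    """Softmax (Gibbs) policy over actions in state s, parameterized by theta."""
    prefs = theta[s] - theta[s].max()
    e = np.exp(prefs)
    return e / e.sum()

def grad_log_pi(theta, s, a):
    """Score function: gradient of log pi(a|s) with respect to theta."""
    g = np.zeros_like(theta)
    g[s] = -pi(theta, s)
    g[s, a] += 1.0
    return g

def actor_critic(steps=20000, alpha=0.05, beta=0.005):
    """Two-time-scale loop: the critic uses the faster step size alpha,
    the actor the slower step size beta (beta < alpha)."""
    theta = np.zeros((nS, nA))   # actor parameters
    w = np.zeros(nS * nA)        # critic weights (linear approximation of Q)
    s = 0
    a = rng.choice(nA, p=pi(theta, s))
    for _ in range(steps):
        s2 = rng.choice(nS, p=P[s, a])
        a2 = rng.choice(nA, p=pi(theta, s2))
        # Temporal-difference error for the state-action value
        delta = R[s, a] + gamma * (w @ phi(s2, a2)) - w @ phi(s, a)
        w += alpha * delta * phi(s, a)                     # critic update
        theta += beta * delta * grad_log_pi(theta, s, a)   # approx. gradient step
        s, a = s2, a2
    return theta, w
```

Note that the one-hot features make the critic effectively tabular; the paper's point is that with a genuinely restricted linear architecture, the features should span the subspace determined by the score functions of the actor's parameterization.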
Keywords: reinforcement learning; Markov decision processes; actor-critic algorithms; stochastic approximation