SIAM Journal on Control and Optimization, Vol.46, No.2, 541-561, 2007
Performance bounds in L^p-norm for approximate value iteration
Approximate value iteration (AVI) is a method for solving large Markov decision problems by approximating the optimal value function with a sequence of value function representations V_n built according to the iteration V_{n+1} = A T V_n, where T is the so-called Bellman operator and A an approximation operator, which may be implemented by a supervised learning (SL) algorithm. Usual bounds on the asymptotic performance of AVI are established in terms of the L^∞-norm approximation errors induced by the SL algorithm. However, most widely used SL algorithms (such as least squares regression) return a function (the best fit) that minimizes an empirical approximation error in L^p-norm (p ≥ 1). In this paper, we extend the performance bounds of AVI to weighted L^p-norms, which enables us to relate the performance of AVI directly to the approximation power of the SL algorithm, hence assuring the tightness and practical relevance of these bounds. The main result is a performance bound on the resulting policies expressed in terms of the L^p-norm errors introduced by the successive approximations. The new bound takes into account a concentration coefficient that estimates how much the discounted future-state distributions, starting from a probability measure used to assess the performance of AVI, can possibly differ from the distribution used in the regression operation. We illustrate the tightness of the bounds on an optimal replacement problem.
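The iteration described above can be sketched in code. The following is a minimal illustration, not the paper's experimental setup: it uses a hypothetical random finite MDP and takes A to be a least-squares projection onto polynomial features (the p = 2 case of the empirical L^p fit mentioned in the abstract). All names, sizes, and features here are assumptions made for the example.

```python
import numpy as np

# Hypothetical toy MDP (not from the paper): 10 states, 2 actions,
# random transition kernel and rewards, discount factor 0.9.
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 10, 2, 0.9
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # R[s, a]

def bellman(V):
    """Bellman optimality operator: (T V)(s) = max_a [R(s,a) + gamma * E[V(s')]]."""
    return np.max(R + gamma * P @ V, axis=1)

# Approximation operator A: least-squares projection of T V onto
# span{1, s, s^2}, i.e. an empirical L^2 fit as in least squares regression.
s = np.arange(n_states)
Phi = np.column_stack([np.ones(n_states), s, s**2]).astype(float)

def approx(target):
    w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    return Phi @ w

# The AVI iteration V_{n+1} = A T V_n.
V = np.zeros(n_states)
for _ in range(100):
    V = approx(bellman(V))

# Greedy policy with respect to the final approximate value function;
# the paper's bounds control the loss of such policies in L^p-norm.
greedy_policy = np.argmax(R + gamma * P @ V, axis=1)
```

Replacing `approx` with the identity recovers exact value iteration; the gap between the two fixed points is what the paper's weighted L^p-norm bounds quantify.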