SIAM Journal on Control and Optimization, Vol.39, No.2, 533-570, 2000
Law of the iterated logarithm for a constant-gain linear stochastic gradient algorithm
We study almost-sure limiting properties, taken as $\varepsilon \searrow 0$, of the finite-horizon sequence of random estimates $\{\theta_0^{\varepsilon}, \theta_1^{\varepsilon}, \theta_2^{\varepsilon}, \ldots, \theta_{[T/\varepsilon]}^{\varepsilon}\}$ for the linear stochastic gradient algorithm
$$\theta_{n+1}^{\varepsilon} = \theta_n^{\varepsilon} + \varepsilon\,\bigl[a_{n+1} - (\theta_n^{\varepsilon})' X_{n+1}\bigr] X_{n+1}, \qquad \theta_0^{\varepsilon} \triangleq \theta^{*} \ \text{nonrandom},$$
where $T \in (0,\infty)$ is an arbitrary constant, $\varepsilon \in (0,1]$ is a (small) adaptation gain, and $\{a_n\}$ and $\{X_n\}$ are data sequences which drive the algorithm. These limiting properties are expressed in the form of a functional law of the iterated logarithm.
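The recursion above is a constant-gain LMS-type update. Below is a minimal numerical sketch of iterating it over the finite horizon $[T/\varepsilon]$, assuming a synthetic linear observation model with i.i.d. Gaussian regressors; the data model, the dimensions, and the helper name `constant_gain_lms` are illustrative choices, not taken from the paper.

```python
import numpy as np

def constant_gain_lms(a, X, theta_star, eps):
    """Run the constant-gain linear stochastic gradient recursion
    theta_{n+1} = theta_n + eps * (a_{n+1} - theta_n' X_{n+1}) * X_{n+1},
    starting from the nonrandom initial condition theta_0 = theta_star.
    Returns the finite-horizon trajectory theta_0, ..., theta_N."""
    N = len(a)
    theta = np.asarray(theta_star, dtype=float).copy()
    path = [theta.copy()]
    for n in range(N):
        err = a[n] - theta @ X[n]        # prediction error a_{n+1} - theta_n' X_{n+1}
        theta = theta + eps * err * X[n]  # gradient step with constant gain eps
        path.append(theta.copy())
    return np.array(path)

# Illustrative driving data: noisy linear observations with Gaussian regressors
# (an assumption for demonstration; the paper allows more general {a_n}, {X_n}).
rng = np.random.default_rng(0)
d, T, eps = 3, 10.0, 0.01
N = int(T / eps)                          # finite horizon [T / eps]
theta_true = np.array([1.0, -2.0, 0.5])
X = rng.standard_normal((N, d))
a = X @ theta_true + 0.1 * rng.standard_normal(N)

path = constant_gain_lms(a, X, theta_star=np.zeros(d), eps=eps)
print(path[-1])                           # final estimate theta_{[T/eps]}
```

Re-running such a simulation for a sequence of decreasing gains $\varepsilon$ is one way to visualize the small-gain, finite-horizon regime whose almost-sure fluctuations the paper characterizes.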