IEEE Transactions on Automatic Control, Vol. 65, No. 6, pp. 2566-2581, 2020
Accelerated Distributed Nesterov Gradient Descent
This paper considers the distributed optimization problem over a network, where the objective is to optimize a global function formed by a sum of local functions, using only local computation and communication. We develop an accelerated distributed Nesterov gradient descent method. When the objective function is convex and $L$-smooth, we show that it achieves an $O(1/t^{1.4-\epsilon})$ convergence rate for all $\epsilon \in (0, 1.4)$. We also show that the convergence rate can be improved to $O(1/t^{2})$ if the objective function is a composition of a linear map and a strongly convex and smooth function. When the objective function is $\mu$-strongly convex and $L$-smooth, we show that it achieves a linear convergence rate of $O\big([1 - C(\mu/L)^{5/7}]^{t}\big)$, where $L/\mu$ is the condition number of the objective, and $C > 0$ is some constant that does not depend on $L/\mu$.
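The abstract does not reproduce the update equations, but the method belongs to the family of distributed first-order schemes that combine consensus averaging, Nesterov momentum, and gradient tracking. The sketch below is a minimal illustration of that family on a toy problem, not necessarily the paper's exact algorithm: the ring network, the mixing matrix `W`, the quadratic local functions `f_i(x) = 0.5*||A_i x - b_i||^2`, and the parameter choices `eta` and `alpha` are all assumptions made for the example.

```python
import numpy as np

# Toy setup (assumed for illustration): n agents, each holding a local
# least-squares term f_i(x) = 0.5 * ||A_i x - b_i||^2; the global objective
# is the sum of these local functions.
rng = np.random.default_rng(0)
n_agents, dim = 5, 3
A = [rng.standard_normal((10, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(10) for _ in range(n_agents)]

def grad(i, x):
    """Gradient of the local function f_i at x."""
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring network; row i holds the
# weights agent i puts on its own and its two neighbors' iterates.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

# Illustrative parameter choices; a real distributed implementation could
# not compute these global spectral quantities locally.
H = sum(A[i].T @ A[i] for i in range(n_agents)) / n_agents
mu_est, L_est = np.linalg.eigvalsh(H)[[0, -1]]
eta = 0.1 / L_est                # conservative step size
alpha = np.sqrt(mu_est * eta)    # Nesterov momentum parameter

# Per-agent iterates: x (position), v (momentum), y (query point), and
# s, the gradient-tracking variable, initialized to the local gradients.
x = np.zeros((n_agents, dim))
v = np.zeros((n_agents, dim))
y = np.zeros((n_agents, dim))
g = np.array([grad(i, y[i]) for i in range(n_agents)])
s = g.copy()

for t in range(2000):
    # W @ (.) is the communication step: each agent averages its neighbors.
    x_new = W @ y - eta * s
    v = (1 - alpha) * (W @ v) + alpha * (W @ y) - (eta / alpha) * s
    y_new = (x_new + alpha * v) / (1 + alpha)
    g_new = np.array([grad(i, y_new[i]) for i in range(n_agents)])
    # Gradient tracking: s_i follows the network-average gradient, so each
    # agent effectively descends the global objective, not just its own f_i.
    s = W @ s + g_new - g
    x, y, g = x_new, y_new, g_new

# All agents should agree and match the centralized least-squares solution.
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print("consensus spread :", np.max(np.abs(x - x.mean(axis=0))))
print("optimality error :", np.linalg.norm(x.mean(axis=0) - x_star))
```

The gradient-tracking variable `s` is what distinguishes this family from plain distributed gradient descent: it lets each agent follow an estimate of the network-average gradient, which is what makes pairing a Nesterov-style momentum step with purely local communication sensible.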