SIAM Journal on Control and Optimization, Vol. 58, No. 1, pp. 510-528, 2020
LYAPUNOV EXPONENT OF RANK-ONE MATRICES: ERGODIC FORMULA AND INAPPROXIMABILITY OF THE OPTIMAL DISTRIBUTION
The Lyapunov exponent corresponding to a set of square matrices $A = \{A_1, \ldots, A_n\}$ and a probability distribution $p$ over $\{1, \ldots, n\}$ is $\lambda(A, p) := \lim_{k \to \infty} \frac{1}{k} \, \mathbb{E} \log \|A_{\sigma_k} \cdots A_{\sigma_2} A_{\sigma_1}\|$, where the $\sigma_i$ are independent and identically distributed according to $p$. This quantity is of fundamental importance to control theory since it determines the asymptotic convergence rate $e^{\lambda(A, p)}$ of the stochastic linear dynamical system $x_{k+1} = A_{\sigma_k} x_k$. This paper investigates the following "design problem": given $A$, compute the distribution $p$ minimizing $\lambda(A, p)$. Our main result is that it is NP-hard to decide whether there exists a distribution $p$ for which $\lambda(A, p) < 0$, i.e., it is NP-hard to decide whether this dynamical system can be stabilized. This hardness result holds even in the "simple" case where $A$ contains only rank-one matrices. Somewhat surprisingly, this is in stark contrast to the Joint Spectral Radius, the deterministic counterpart of the Lyapunov exponent, for which the analogous optimization problem over switching rules is known to be exactly computable in polynomial time for rank-one matrices. To prove this hardness result, we first observe via Birkhoff's Ergodic Theorem that the Lyapunov exponent of rank-one matrices admits a simple formula and is in fact a quadratic form in $p$. Hardness of the design problem is then shown through a reduction from the Independent Set problem. Along the way, simple examples are given illustrating that $p \mapsto \lambda(A, p)$ is neither convex nor concave in general. We conclude with extensions to continuous distributions, exchangeable processes, Markov processes, and stationary ergodic processes.
Keywords: Lyapunov exponent; stochastic linear dynamical system; asymptotic stability; hardness of approximation
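To make the definition above concrete, the following minimal Python/NumPy sketch estimates $\lambda(A, p)$ by simulating one long random product (tracking the log-growth of a vector and renormalizing at each step to avoid overflow), and, for rank-one matrices $A_i = u_i v_i^\top$, evaluates the quadratic form $\sum_{i,j} p_i p_j \log |v_i^\top u_j|$ suggested by the telescoping structure of rank-one products. The function names, example matrices, and this explicit expression for the quadratic form are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def lyapunov_monte_carlo(mats, p, k=100_000):
    # Estimate lambda(A, p) as the average log-growth of ||A_{sigma_k} ... A_{sigma_1} x||
    # along one long random product, renormalizing at every step to avoid overflow.
    x = rng.standard_normal(mats[0].shape[0])
    x /= np.linalg.norm(x)
    total = 0.0
    for i in rng.choice(len(mats), size=k, p=p):
        x = mats[i] @ x
        nrm = np.linalg.norm(x)
        total += np.log(nrm)
        x /= nrm
    return total / k

def lyapunov_rank_one(us, vs, p):
    # Hypothetical closed form for rank-one matrices A_i = u_i v_i^T:
    # sum_{i,j} p_i p_j log|v_i^T u_j|  (assumes all inner products are nonzero).
    G = np.log(np.abs(np.array(vs) @ np.array(us).T))  # G[i, j] = log|v_i^T u_j|
    p = np.asarray(p)
    return p @ G @ p

# Made-up example: two rank-one 2x2 matrices and a distribution p.
us = [np.array([1.0, 0.3]), np.array([0.2, 1.0])]
vs = [np.array([0.5, -0.4]), np.array([0.9, 0.1])]
mats = [np.outer(u, v) for u, v in zip(us, vs)]
p = [0.6, 0.4]

print(lyapunov_monte_carlo(mats, p))   # simulation estimate of lambda(A, p)
print(lyapunov_rank_one(us, vs, p))    # value of the quadratic form in p

For this made-up example both routines return a negative value, i.e., the corresponding distribution $p$ stabilizes the system; sweeping $p$ over the simplex in such a sketch is one way to visualize that $p \mapsto \lambda(A, p)$ need not be convex or concave.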