On the global convergence of randomized coordinate gradient descent for
non-convex optimization
Authors
Chen, Z.; Li, Y.; Lu, J.
Abstract
In this work, we analyze the global convergence properties of coordinate
gradient descent with randomly chosen coordinates and stepsizes for non-convex
optimization problems. Under generic assumptions, we prove that the algorithm
iterates almost surely escape strict saddle points of the objective function.
As a consequence, the algorithm is guaranteed to converge to local minima
whenever all saddle points are strict. Our proof rests on viewing the
coordinate descent algorithm as a nonlinear random dynamical system and on a
quantitative finite-block analysis of its linearization around saddle points.
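For concreteness, one standard form of the update in this setting is sketched below; the uniform coordinate sampling and the symbols \(e_{i_k}\), \(\alpha_k\) are illustrative assumptions, not details specified by the abstract:

\[
x_{k+1} = x_k - \alpha_k \, \partial_{i_k} f(x_k) \, e_{i_k},
\]

where \(f : \mathbb{R}^n \to \mathbb{R}\) is the objective, \(i_k\) is a coordinate drawn at random from \(\{1, \dots, n\}\), \(\alpha_k\) is a randomly chosen stepsize, and \(e_{i_k}\) is the \(i_k\)-th standard basis vector. Recall that a strict saddle point is a critical point at which the Hessian has at least one strictly negative eigenvalue, so the linearization around it admits an unstable direction.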