SIAM Journal on Control and Optimization, Vol. 46, No. 5, pp. 1683-1704, 2007
Asymptotic convergence analysis of a new class of proximal point methods
Finite-dimensional local convergence results for self-adaptive proximal point methods applied to nonlinear functions with multiple minimizers are generalized and extended to a Hilbert space setting. The principal assumption is a local error bound condition which relates the growth in the function to the distance to the set of minimizers. A local convergence result is established for almost exact iterates. Less restrictive acceptance criteria for the proximal iterates are also analyzed. These criteria are expressed in terms of a subdifferential of the proximal function and either a subdifferential of the original function or an iteration difference. If the proximal regularization parameter mu(x) is sufficiently small and bounded away from zero and f is sufficiently smooth, then there is local linear convergence to the set of minimizers. For a locally convex function, a convergence result similar to that for almost exact iterates is established. For a locally convex solution set and smooth functions, it is shown that if the proximal regularization parameter has the form mu(x) = beta ||f'(x)||^eta, where eta in (0, 2), then the convergence is at least superlinear if eta in (0, 1) and at least quadratic if eta in [1, 2).
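The self-adaptive iteration described in the abstract can be illustrated with a minimal one-dimensional sketch. This is not the paper's algorithm or analysis; it only shows the generic proximal point update x_{k+1} = argmin_y f(y) + (mu(x_k)/2)(y - x_k)^2 with the adaptive choice mu(x) = beta*|f'(x)|^eta. The test function f(x) = (x - 3)^2, the parameters beta and eta, the bracketing interval, and the bisection inner solver are all illustrative assumptions.

```python
# Sketch of a self-adaptive proximal point iteration in 1D.
# All concrete choices (f, beta, eta, brackets, tolerances) are
# illustrative, not taken from the paper.

def prox_step(fprime, x, mu, lo, hi, n_bisect=200):
    """Solve the proximal subproblem optimality condition
    f'(y) + mu*(y - x) = 0 by bisection on [lo, hi].
    Assumes the root is bracketed: g(lo) < 0 < g(hi)."""
    g = lambda y: fprime(y) + mu * (y - x)
    a, b = lo, hi
    for _ in range(n_bisect):
        m = 0.5 * (a + b)
        if g(m) > 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

def proximal_point(fprime, x0, beta=1.0, eta=1.0, iters=30):
    """Proximal point method with adaptive regularization
    mu(x) = beta * |f'(x)|**eta (the form discussed in the abstract)."""
    x = x0
    for _ in range(iters):
        mu = beta * abs(fprime(x)) ** eta
        if mu == 0.0:          # already at a stationary point
            break
        x = prox_step(fprime, x, mu, x - 10.0, x + 10.0)
    return x

# f(x) = (x - 3)^2, so f'(x) = 2*(x - 3); unique minimizer x* = 3.
x_star = proximal_point(lambda x: 2.0 * (x - 3.0), 0.0)
print(x_star)
```

For this quadratic the subproblem has the closed form y = (6 + mu*x)/(2 + mu), and with eta = 1 the error satisfies e_{k+1} = e_k^2/(1 + e_k), consistent with the at-least-quadratic rate claimed for eta in [1, 2); the bisection solver is used only to keep the sketch generic.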