Computers & Chemical Engineering, Vol. 57, 24–29, 2013
Barrier NLP methods with structured regularization for optimization of degenerate optimization problems
Barrier nonlinear programming (NLP) solvers exploit sparse Newton-based algorithms and are characterized by fast performance and global convergence properties. This makes them especially suitable for very large process optimization problems. On the other hand, they are frequently challenged by degenerate and indefinite problems, which lead to ill-conditioned Karush-Kuhn-Tucker (KKT) systems. Such problems arise when process optimization models contain linearly dependent constraints, or when the reduced Hessian is not positive definite at the solution. This can degrade solver performance and may prevent the solver from finding successful NLP solutions. Moreover, such optimization models occur frequently in blending problems and in NLP subproblems generated by MINLP or global optimization strategies. To deal with these difficulties, we present a structured regularization strategy for barrier methods that identifies and regularizes only the dependent constraints in the KKT system while leaving the independent constraints unchanged. As a result, more accurate Newton directions are obtained from the KKT system, and much faster convergence can be expected than with the conventional regularization approach. Numerical experiments with examples derived from the CUTE and COPS test sets, as well as two nonlinear blending problems, demonstrate the effectiveness of the proposed method and significantly better performance of the NLP solver.
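To illustrate the idea behind structured versus conventional regularization of the KKT system, the following is a minimal dense NumPy/SciPy sketch, not the authors' implementation (which would operate inside a sparse symmetric indefinite factorization within a barrier solver). The pivoted-QR dependency test, the tolerance `tol`, and the regularization value 1e-4 are illustrative assumptions; conventional regularization perturbs every constraint row of the KKT matrix, while the structured variant perturbs only rows flagged as (nearly) linearly dependent, so the step still satisfies the independent constraints exactly.

```python
import numpy as np
from scipy.linalg import qr

def dependent_rows(A, tol=1e-8):
    """Flag constraint gradients (rows of A) that are nearly linearly
    dependent, using a rank-revealing (pivoted) QR of A^T.
    The tolerance is an illustrative choice, not the paper's."""
    _, R, piv = qr(A.T, pivoting=True)
    diag = np.abs(np.diag(R))
    rank = int(np.sum(diag > tol * diag[0])) if diag.size else 0
    dep = np.ones(A.shape[0], dtype=bool)
    dep[piv[:rank]] = False   # pivoted columns of A^T correspond to independent rows of A
    return dep

def solve_kkt(W, A, g, c, delta_w=0.0, delta_c=None):
    """Solve the regularized KKT system
        [ W + delta_w*I        A^T      ] [dx]   [-g]
        [ A             -diag(delta_c)  ] [dl] = [-c]
    where delta_c is a per-constraint regularization vector."""
    n, m = W.shape[0], A.shape[0]
    if delta_c is None:
        delta_c = np.zeros(m)
    K = np.block([[W + delta_w * np.eye(n), A.T],
                  [A, -np.diag(delta_c)]])
    rhs = -np.concatenate([g, c])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]

# Example: a duplicated constraint makes the Jacobian A rank-deficient.
W = np.diag([2.0, 3.0, 4.0])                # Hessian of the Lagrangian (toy data)
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],              # duplicate of row 0 -> dependent
              [0.0, 1.0, 1.0]])
g = np.array([1.0, -2.0, 0.5])              # gradient of the objective/barrier
c = np.array([0.1, 0.1, -0.2])              # constraint residuals (consistent)

# Conventional regularization: perturb every constraint row equally.
dx_conv, _ = solve_kkt(W, A, g, c, delta_c=np.full(3, 1e-4))

# Structured regularization: perturb only the rows flagged as dependent.
dep = dependent_rows(A)
dx_struct, _ = solve_kkt(W, A, g, c, delta_c=np.where(dep, 1e-4, 0.0))

print("independent constraint residual, conventional:", A[2] @ dx_conv + c[2])
print("independent constraint residual, structured:  ", A[2] @ dx_struct + c[2])
```

In this toy setting the structured step drives the residuals of the independent constraints to (near) zero, whereas the uniform perturbation distorts all constraint rows; this is the qualitative effect the abstract attributes to the proposed strategy, reproduced here only under the stated assumptions.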