Fest, J., Heikkilä, T., Loris, I., Martin, S., Ratti, L., Rebegoldi, S., et al. (2024). On a Fixed-Point Continuation Method for a Convex Optimization Problem. Singapore: Springer [doi: 10.1007/978-981-97-6769-4_2].
On a Fixed-Point Continuation Method for a Convex Optimization Problem
Luca Ratti; Simone Rebegoldi
2024
Abstract
We consider a variation of the classical proximal-gradient algorithm for the iterative minimization of a cost function consisting of a sum of two terms, one smooth and the other prox-simple, whose relative weight is determined by a penalty parameter. This so-called fixed-point continuation method allows one to approximate the problem's trade-off curve, i.e., to compute the minimizers of the cost function for a whole range of values of the penalty parameter at once. The algorithm is shown to converge, and a rate of convergence of the cost function is also derived. Furthermore, it is shown that this method is related to iterative algorithms constructed on the basis of the ε-subdifferential of the prox-simple term. Some numerical examples are provided.
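For context, a minimal sketch of the classical proximal-gradient iteration that the abstract takes as its starting point, specialized (as an illustrative assumption, not the paper's actual setting) to a smooth least-squares term plus an ℓ1 penalty, whose proximal operator is the soft-thresholding map. In this classical scheme the trade-off curve must be traced by re-solving for each value of the penalty parameter `lam` separately; the paper's fixed-point continuation method instead targets a whole range of penalty values at once.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1, a standard example of a
    # prox-simple term.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(A, b, lam, step, n_iter=500):
    # Classical proximal-gradient (ISTA) iteration for
    #   min_x  0.5 * ||A x - b||^2 + lam * ||x||_1,
    # where lam is the penalty parameter weighting the two terms.
    # `step` should not exceed 1/L, with L the Lipschitz constant
    # of the gradient of the smooth term (here L = ||A||^2).
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                 # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)  # prox step
    return x
```

Sweeping `lam` over a grid and calling `proximal_gradient` once per value yields a discrete approximation of the trade-off curve; avoiding this one-solve-per-parameter cost is precisely the motivation for the continuation method described in the abstract.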


