TA-02: Nonlinear Composite and Constrained Optimization – Part I
Stream: Advances in mathematical optimization for machine learning
Room: Turing
Chair(s): Cong Bang Vu, Dimitri Papadimitriou

Proximal alternating minimization for solving discrete Mumford-Shah and Ambrosio-Tortorelli models
Hoang Trieu Vy LE, Marion Foare, Mamadou Ngom, Nelly Pustelnik
This work focuses on a class of imaging problems that can be reformulated as nonlinear composite problems. We derive two proximal alternating minimization schemes with convergence guarantees to estimate critical points. In particular, we concentrate on the discrete Mumford-Shah and the Ambrosio-Tortorelli models, and we compare their behaviors when the discrete counterpart of the 1D Hausdorff measure is modeled either by an l1-norm or by the Ambrosio-Tortorelli (AT) penalization. Closed-form expressions of the involved proximity operators are provided.
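As a rough illustration of this kind of alternating scheme, the sketch below minimizes a 1D discrete Mumford-Shah energy with an l1 edge penalty by alternating a gradient step on the image variable with a closed-form update of the edge variable. The energy weights, step size, and simplified updates are assumptions for illustration; they do not reproduce the proximal alternating minimization schemes or convergence guarantees discussed in the talk.

```python
import numpy as np

def discrete_mumford_shah_alt(z, beta=10.0, lam=0.5, n_iter=200):
    """Alternating-minimization sketch for a 1D discrete Mumford-Shah model
    with an l1 edge penalty (illustrative, not the talk's PAM schemes):
        min_{u,e} 0.5*||u - z||^2 + beta*||(1 - e) * D u||^2 + lam*||e||_1,
    where D is the forward finite-difference operator and e lives on edges.
    """
    n = z.size
    u = z.copy()
    e = np.zeros(n - 1)                       # edge indicator on gradient positions
    step = 1.0 / (1.0 + 8.0 * beta)           # conservative step for the smooth part
    for _ in range(n_iter):
        # u-step: gradient step on 0.5*||u - z||^2 + beta*sum_i (1 - e_i)^2 (u_{i+1} - u_i)^2
        du = np.diff(u)
        w = (1.0 - e) ** 2
        grad_u = u - z
        grad_u[:-1] -= 2.0 * beta * w * du
        grad_u[1:] += 2.0 * beta * w * du
        u = u - step * grad_u
        # e-step: closed-form minimizer of beta*(1 - e)^2*du^2 + lam*|e| over e in [0, 1]
        du = np.diff(u)
        e = np.clip((2.0 * beta * du ** 2 - lam) / (2.0 * beta * du ** 2 + 1e-12), 0.0, 1.0)
    return u, e
```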

Random extrapolation for primal-dual coordinate descent
Ahmet Alacaoglu, Olivier Fercoq, Volkan Cevher
We introduce a randomly extrapolated primal-dual coordinate descent method that adapts to the sparsity of the data matrix and to favorable structures of the objective function. When the data are sparse, our method updates only a subset of the primal and dual variables; when they are dense, it uses large step sizes. We prove linear convergence under metric subregularity for strongly convex-strongly concave problems, as well as almost sure convergence of the sequence and optimal sublinear convergence rates for the primal-dual gap and objective values in the general convex-concave case.
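For context, a plain deterministic primal-dual (PDHG-style) iteration on a Lasso problem is sketched below. The talk's method instead updates random coordinate blocks and replaces the deterministic extrapolation step by a randomized one; those details, as well as the problem, step sizes, and function names used here, are illustrative assumptions rather than the authors' algorithm.

```python
import numpy as np

def pdhg_lasso(A, b, lam=0.1, n_iter=500):
    """Deterministic primal-dual hybrid gradient baseline for the Lasso
        min_x lam*||x||_1 + 0.5*||A x - b||^2,
    written in saddle-point form with dual variable y ~ A x - b.
    Only a baseline sketch: no coordinate sampling, no random extrapolation.
    """
    m, n = A.shape
    L = np.linalg.norm(A, 2)
    tau, sigma = 0.9 / L, 0.9 / L              # primal/dual step sizes, tau*sigma*L^2 < 1
    x, y = np.zeros(n), np.zeros(m)
    x_bar = x.copy()
    for _ in range(n_iter):
        # dual ascent step: prox of f*(y) = 0.5*||y||^2 + <b, y>
        y = (y + sigma * (A @ x_bar - b)) / (1.0 + sigma)
        # primal descent step: prox of lam*||.||_1 (soft-thresholding)
        x_new = x - tau * (A.T @ y)
        x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - tau * lam, 0.0)
        # extrapolation (deterministic here; randomized in the talk's method)
        x_bar = 2.0 * x_new - x
        x = x_new
    return x
```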

Primal-Dual Proximal Splitting Algorithms for Large-Scale Convex Optimization
Laurent Condat, Adil Salim, Peter Richtarik
We study algorithms to minimize a sum of convex functions, possibly composed with linear operators, using their individual gradient or proximity operators. Using two different constructions based on the splitting technique proposed by Davis and Yin in 2017, we recover existing algorithms and discover a new one, which we call PDDY. We also discuss distributed, decentralized, and stochastic-gradient variants. Moreover, we derive nonergodic sublinear or linear convergence rates, as well as new accelerated versions in the presence of strong convexity, using varying stepsizes.
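Since the abstract builds on the splitting of Davis and Yin (2017), a minimal sketch of that three-operator splitting is given below, together with a toy usage example. The PDDY algorithm itself, its primal-dual structure, and the accelerated or stochastic variants are not reproduced here, and the toy problem and step size are assumptions for illustration.

```python
import numpy as np

def davis_yin(prox_f, prox_g, grad_h, x0, step=1.0, n_iter=300):
    """Davis-Yin three-operator splitting for min_x f(x) + g(x) + h(x),
    with h smooth (gradient available) and f, g prox-friendly.
    This is the 2017 building block named in the abstract, not PDDY itself.
    """
    z = x0.copy()
    for _ in range(n_iter):
        x = prox_f(z, step)
        y = prox_g(2.0 * x - z - step * grad_h(x), step)
        z = z + y - x
    return x

# Toy usage (assumed problem): min_x 0.5*||x - a||^2 + lam*||x||_1 + indicator{x >= 0}
a, lam = np.array([1.0, -2.0, 0.5]), 0.3
sol = davis_yin(
    prox_f=lambda v, t: np.maximum(v, 0.0),                               # projection onto x >= 0
    prox_g=lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t * lam, 0),  # soft-thresholding
    grad_h=lambda x: x - a,                                               # gradient of the quadratic
    x0=np.zeros(3),
)
print(sol)   # converges to approximately [0.7, 0.0, 0.2]
```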

On a Primal-Dual Newton Proximal Method for Convex Quadratic Programs
Alberto De Marchi
We introduce QPDO, a primal-dual method for convex quadratic programs which builds upon the proximal point algorithm and a damped semismooth Newton's method. The outer proximal regularization yields a numerically stable method, and we interpret the proximal operator as the unconstrained minimization of a suitable merit function. The inner minimization relies on linear systems that are always solvable and on an exact line search. We report numerical results against state-of-the-art solvers. QPDO proves to be a simple, robust, and efficient numerical method for convex quadratic programming.
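To illustrate the effect of the outer proximal regularization, the sketch below applies a primal-dual proximal point iteration to an equality-constrained QP, where each step reduces to one regularized KKT system that is nonsingular by construction. The inequality-constrained case, the damped semismooth Newton inner loop, the merit function, and the exact line search of QPDO are all omitted; the regularization parameter and the toy problem are assumed values.

```python
import numpy as np

def qp_proximal_point(Q, q, A, b, gamma=1e2, n_iter=30):
    """Primal-dual proximal point sketch for the equality-constrained QP
        min_x 0.5*x'Qx + q'x   s.t.   A x = b.
    Each outer iteration solves the proximally regularized KKT system
        [Q + (1/gamma) I    A'            ] [x]   [ (1/gamma) x_k - q ]
        [A                 -(1/gamma) I   ] [y] = [ b - (1/gamma) y_k ],
    which is always solvable because the diagonal blocks are definite.
    """
    n, m = Q.shape[0], A.shape[0]
    x, y = np.zeros(n), np.zeros(m)
    r = 1.0 / gamma                                    # proximal regularization weight
    K = np.block([[Q + r * np.eye(n), A.T],
                  [A, -r * np.eye(m)]])
    for _ in range(n_iter):
        rhs = np.concatenate([r * x - q, b - r * y])
        sol = np.linalg.solve(K, rhs)
        x, y = sol[:n], sol[n:]
    return x, y

# Toy usage: minimize 0.5*(x1^2 + x2^2) subject to x1 + x2 = 1
x_opt, _ = qp_proximal_point(np.eye(2), np.zeros(2), np.array([[1.0, 1.0]]), np.array([1.0]))
print(x_opt)   # converges to [0.5, 0.5]
```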