TA-01: Non-smoothness, inexactness and applications
Stream: Alternating Direction Method of Multipliers and its Applications
Room: Fermat
Chair(s): Stefano Cipolla

Differentiating non-smooth (minimisation or) saddle point problems
Antonin Chambolle
In recent work with T. Pock (TU Graz), we adapted the “piggyback” method to compute the differential of a loss that depends on the solution of a non-smooth saddle-point problem. The idea is to estimate the adjoint state by an iterative method run in parallel with the computation of the solution, rather than by inverting a system that depends on the solution. One advantage is that this can also be done for degenerate problems where the inversion is impossible. While the method works in practice, it is justified only for smooth problems. We will discuss attempts to generalize it to less smooth settings.
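The core idea can be illustrated on a toy fixed-point problem (a hypothetical one-dimensional example, not the saddle-point setting of the talk): instead of inverting I - dT/dx to differentiate the fixed point of x = T(x, theta), the derivative recursion is iterated "piggyback" alongside the solution iterates.

```python
# Piggyback differentiation sketch (illustrative toy example): differentiate
# the fixed point of a contraction x = T(x, theta) with respect to theta by
# iterating the derivative recursion alongside the solution iterates,
# instead of solving the linear system (I - dT/dx) J = dT/dtheta.

def piggyback(theta, iters=100):
    # Toy contraction T(x, theta) = 0.5 * x + theta, with fixed point
    # x* = 2 * theta and hence dx*/dtheta = 2.
    x, J = 0.0, 0.0          # solution iterate and its derivative w.r.t. theta
    for _ in range(iters):
        J = 0.5 * J + 1.0    # piggyback recursion: dT/dx * J + dT/dtheta
        x = 0.5 * x + theta  # solution recursion
    return x, J

x_star, dx_dtheta = piggyback(1.5)
# x_star ≈ 3.0 and dx_dtheta ≈ 2.0 (= (1 - dT/dx)^{-1} * dT/dtheta)
```

Running both recursions together means no converged solution is needed before differentiation starts, which is what makes the scheme usable when the system to invert is degenerate.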

Principled analysis of methods with inexact proximal computations
Mathieu Barré, Adrien Taylor
Proximal operations are among the most common primitives in optimization; in particular, they are crucial tools in many standard splitting methods. We show that worst-case guarantees for algorithms relying on inexact proximal operations can be obtained systematically through a generic procedure based on semidefinite programming. The methodology builds on the approach of (Drori & Teboulle, 2014) and on convex interpolation results, and produces non-improvable worst-case analyses. We illustrate it on various algorithms involving inexact proximal computations.
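The kind of algorithm such analyses cover can be sketched on a toy problem (a hypothetical one-dimensional example, not taken from the talk): a proximal-point iteration where each prox subproblem is solved only approximately by inner gradient steps, with the inner accuracy tightened over the outer iterations.

```python
# Inexact proximal-point sketch (illustrative toy, assuming f(u) = u^2 / 2,
# for which the exact prox is v / (1 + lam)): each prox subproblem
#   min_u f(u) + (1 / (2 * lam)) * (u - v)^2
# is solved only approximately by inner gradient descent.

def inexact_prox(v, lam, inner_iters):
    u, step = v, 0.3   # deliberately conservative step: the solve stays inexact
    for _ in range(inner_iters):
        grad = u + (u - v) / lam   # gradient of the prox subproblem at u
        u -= step * grad
    return u

x, lam = 5.0, 1.0
for k in range(30):
    x = inexact_prox(x, lam, inner_iters=k + 1)  # tighten the inner accuracy
# x approaches the minimizer 0 of f even though every prox call is inexact
```

Worst-case analyses of this pattern quantify how the inner errors propagate to the outer rate; the sketch shows only the algorithmic structure being analyzed, not the semidefinite-programming machinery itself.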

A Flexible Optimization Framework for Regularized Linearly Coupled Matrix-Tensor Factorizations based on the Alternating Direction Method of Multipliers
Jeremy Cohen, Carla Schenker, Evrim Acar Ataman
Coupled matrix and tensor factorizations (CMTF) are frequently used to jointly analyze data from multiple sources, a task also called data fusion. We propose a flexible algorithmic framework for coupled matrix and tensor factorizations that combines Alternating Optimization (AO) with the Alternating Direction Method of Multipliers (ADMM). The framework accommodates a variety of constraints, regularizations, loss functions and couplings with linear transformations in a seamless way. We demonstrate its performance on simulated and real datasets.
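The way ADMM handles a constraint inside one alternating-optimization factor update can be sketched on a scalar toy problem (a hypothetical example, not the authors' framework): a nonnegativity constraint is imposed by splitting the variable, x = z with z >= 0, so the constrained update decomposes into a quadratic step, a projection, and a dual update.

```python
# Minimal ADMM sketch for a constrained least-squares update (illustrative):
# solve min_x 0.5 * (x - a)^2 subject to x >= 0 via the splitting x = z,
# z >= 0 -- the pattern used for each factor update inside an AO outer loop.

def admm_nonneg_ls(a, rho=1.0, iters=100):
    x = z = u = 0.0
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)  # unconstrained quadratic x-update
        z = max(0.0, x + u)                    # projection onto the constraint
        u += x - z                             # scaled dual (multiplier) update
    return x

p = admm_nonneg_ls(3.0)   # ≈ 3.0 (projection of 3 onto x >= 0)
q = admm_nonneg_ls(-2.0)  # ≈ 0.0 (constraint is active)
```

Swapping the projection for another prox operator (soft-thresholding, simplex projection, ...) changes the regularizer or constraint without touching the rest of the update, which is the flexibility the abstract refers to.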

An ADMM-Newton-CNN Numerical Approach to a TV Model for Identifying Discontinuous Diffusion Coefficients in Elliptic Equations: Convex Case with Gradient Observations
Xiaoming Yuan
We consider a TV model for identifying the discontinuous diffusion coefficient in an elliptic equation from observations of the gradient of the solution, and show that ADMM can be applied effectively. We also show that one of the ADMM subproblems can be solved well by an active-set Newton method combined with a Schur-complement reduction, while the other can be solved efficiently by a deep convolutional neural network (CNN). The resulting ADMM-Newton-CNN approach is demonstrated to be efficient even in higher-dimensional spaces with fine mesh discretizations.
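The ADMM splitting underlying such TV models can be illustrated on a 1-D denoising analogue (a hypothetical toy, not the paper's PDE-constrained setting): min_x 0.5*||x - b||^2 + lam*||Dx||_1 with D the forward-difference operator, split as z = Dx. The x-subproblem is then a tridiagonal linear solve and the z-subproblem is soft-thresholding.

```python
# ADMM sketch for 1-D total-variation denoising (illustrative toy example):
#   min_x 0.5 * ||x - b||^2 + lam * ||D x||_1,  (D x)_i = x_{i+1} - x_i,
# split as z = D x. The x-update solves (I + rho * D^T D) x = b + rho * D^T (z - u),
# a tridiagonal system; the z-update is soft-thresholding at lam / rho.

def solve_tridiag(lower, diag, upper, rhs):
    # Thomas algorithm for a tridiagonal linear system.
    n = len(diag)
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = upper[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i - 1] * c[i - 1]
        c[i] = (upper[i] / m) if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i - 1] * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def tv_admm(b, lam, rho=1.0, iters=2000):
    n = len(b)
    x, z, u = list(b), [0.0] * (n - 1), [0.0] * (n - 1)
    for _ in range(iters):
        # x-update: note (D^T w)_i = w_{i-1} - w_i with zero boundary terms
        w = [z[i] - u[i] for i in range(n - 1)]
        rhs = [b[i] + rho * ((w[i - 1] if i > 0 else 0.0)
                             - (w[i] if i < n - 1 else 0.0)) for i in range(n)]
        diag = [1.0 + rho * (2.0 if 0 < i < n - 1 else 1.0) for i in range(n)]
        off = [-rho] * (n - 1)
        x = solve_tridiag(off, diag, off, rhs)
        # z-update (soft-threshold) and scaled dual update, with v = Dx + u
        for i in range(n - 1):
            v = x[i + 1] - x[i] + u[i]
            z[i] = max(abs(v) - lam / rho, 0.0) * (1.0 if v >= 0 else -1.0)
            u[i] = v - z[i]
    return x

x = tv_admm([1.0, 1.0, 1.0, 5.0, 5.0], lam=1.0)
# Closed-form TV solution for this two-level signal: [4/3, 4/3, 4/3, 4.5, 4.5]
```

In the paper's setting the linear-solve role is played by the active-set Newton step with Schur-complement reduction, and the second subproblem is handled by a CNN rather than a closed-form prox; the toy only shows the splitting structure.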