FD-03: Advances in Operator Splitting Techniques
Stream: Variational inequalities, Nash games, game theory, multilevel and dynamic optimization
Chair(s): Mathias Staudigl
On optimal gradient methods and oracle complexity for smooth (possibly strongly) convex minimization
In this talk, I want to present the (definitely) optimal (black-box) gradient method for smooth strongly convex minimization, which we call the Information-Theoretic Exact Method (ITEM). This method was obtained through semidefinite programming, and we humbly believe it provides some nice insight into accelerated methods. Along the way, I will discuss the corresponding (matching) lower complexity bounds and a constructive procedure for obtaining them, as well as a few links between accelerated and conjugate gradient methods. This talk is based on joint works with Yoel Drori.
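For context, the classical exact lower bound that such a method must match can be stated as follows (a standard formulation, assuming an $L$-smooth, $\mu$-strongly convex objective with condition number $\kappa = L/\mu$ and minimizer $x_\star$; the exact constants for the abstract's setting are those given in the talk):

```latex
% For any black-box first-order method producing iterates x_1, ..., x_N
% from N gradient evaluations, there is an L-smooth, mu-strongly convex f with
\|x_N - x_\star\|^2 \;\ge\; \left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^{2N} \|x_0 - x_\star\|^2,
\qquad \kappa = L/\mu .
```

A method whose worst-case guarantee meets this bound with equality is optimal in the black-box oracle model.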
The sample complexity of level set estimation
François Bachoc, Tommaso Cesari, Sébastien Gerchinovitz
The problem of approximating the level set of a real function f of many real variables arises naturally in many fields, including physics, engineering, and computer science. We investigate the minimum number of evaluations of f needed to output a guaranteed approximation of such a level set. We study the general case of Lipschitz functions f as well as other, smaller function spaces. In all the cases we consider, we show that this sample complexity is exponential in the number of variables. Additionally, we provide algorithms and intuition for how to attain the optimal sample-complexity rates.
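The exponential dependence on dimension is already visible in the most naive certified procedure for Lipschitz functions: evaluate f on a uniform grid whose spacing is tied to the Lipschitz constant, and keep the cells that may intersect the level set. The sketch below (my own toy illustration, not the authors' algorithm; `level_set_cover` and its parameters are hypothetical names) uses ((hi − lo)·L/eps)^dim evaluations:

```python
# Sketch: grid-based certified level-set approximation for an L-Lipschitz f.
import itertools
import math

def level_set_cover(f, dim, lo, hi, c, eps, L):
    """Return grid cells possibly meeting the level set {x : f(x) = c}.

    Evaluates f at the centers of a uniform grid with spacing delta = eps / L
    over [lo, hi]^dim; the evaluation count is n**dim with n ~ (hi - lo) / delta,
    i.e. exponential in dim.
    """
    delta = eps / L
    n = max(1, math.ceil((hi - lo) / delta))
    centers = [lo + (i + 0.5) * (hi - lo) / n for i in range(n)]
    hits, evals = [], 0
    for x in itertools.product(centers, repeat=dim):
        evals += 1
        # Lipschitz certificate: a cell of radius r can contain a point of
        # the level set only if |f(center) - c| <= L * r + eps.
        radius = (hi - lo) / n * math.sqrt(dim) / 2
        if abs(f(x) - c) <= L * radius + eps:
            hits.append(x)
    return hits, evals

# Toy example: f(x) = ||x||_2 (Lipschitz with L = 1), level c = 0.5 on [-1, 1]^2.
hits, evals = level_set_cover(lambda x: math.hypot(*x), 2, -1.0, 1.0, 0.5, 0.1, 1.0)
```

With dim = 2 and eps/L = 0.1 this already costs 20**2 = 400 evaluations; at dim = 10 the same resolution would need 20**10, which is the exponential blow-up the abstract refers to.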
Multi-block Bregman proximal alternating linearized minimization for structured nonsmooth nonconvex problems
Masoud Ahookhosh, Le Thi Khanh Hien, Nicolas Gillis, Panagiotis Patrinos
We introduce BPALM and A-BPALM, two multi-block proximal alternating linearized minimization algorithms using Bregman distances for minimizing the sum of a multi-block relatively smooth function and block-separable (nonsmooth) nonconvex functions. Our algorithms converge globally to critical points of the objective function under the Kurdyka-Łojasiewicz (KL) assumption, and their rate of convergence is studied for Łojasiewicz-type KL functions. We apply this framework to orthogonal nonnegative matrix factorization (ONMF), which satisfies all of our assumptions, and the related subproblems are solved in closed form.
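To illustrate the alternating-linearized template in its simplest form, here is a two-block Euclidean PALM sketch for plain nonnegative matrix factorization (my own minimal example, not BPALM itself: it uses the Euclidean distance in place of a general Bregman distance, and NMF in place of ONMF; `palm_nmf` is a hypothetical name):

```python
# Two-block proximal alternating linearized minimization (Euclidean case)
# for min_{U,V >= 0} (1/2) ||X - U V||_F^2.
import numpy as np

def palm_nmf(X, r, iters=500, seed=0):
    """Alternate over blocks U and V: each block takes a gradient step on the
    smooth coupling term with a 1/L step size, then applies the prox of the
    nonnegativity constraint (projection onto the nonnegative orthant)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, r))
    V = rng.random((r, n))
    for _ in range(iters):
        # Block U: grad_U = (U V - X) V^T, block Lipschitz constant ||V V^T||_2.
        LU = np.linalg.norm(V @ V.T, 2) + 1e-12
        U = np.maximum(U - (U @ V - X) @ V.T / LU, 0.0)
        # Block V: grad_V = U^T (U V - X), block Lipschitz constant ||U^T U||_2.
        LV = np.linalg.norm(U.T @ U, 2) + 1e-12
        V = np.maximum(V - U.T @ (U @ V - X) / LV, 0.0)
    return U, V
```

Replacing the squared Euclidean distance in each linearized step with a Bregman distance adapted to the block's geometry is what relaxes the Lipschitz-gradient requirement to relative smoothness; for ONMF the resulting block subproblems admit the closed-form solutions mentioned in the abstract.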