TD-02: Advances in Douglas-Rachford method – Part I
Stream: Advances in mathematical optimization for machine learning
Room: Turing
Chair(s): Cong Bang Vu, Dimitri Papadimitriou


Split-Douglas-Rachford for composite monotone inclusions and Split-ADMM
Luis Briceño-Arias, Fernando Roldán
We provide a generalization of Douglas-Rachford splitting (DRS) for solving monotone inclusions involving a general linear operator in a real Hilbert space. The proposed method activates the linear operator separately from the monotone operators appearing in the inclusion and, in the simplest case when the linear operator is the identity, it reduces to classical DRS. Moreover, weak convergence of the primal-dual sequences to a point in the extended solution set is guaranteed, generalizing Svaiter (2011). As in Gabay (1983), we derive a new Split-ADMM in the convex optimization setting.
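
For context, the classical DRS iteration recovered in the identity case can be sketched as follows. This is a minimal illustration of standard DRS for minimizing a sum of two proximable functions, not the Split-DRS of the talk; the prox callables and the projection example are assumptions made for the sketch.

```python
# Minimal sketch of classical Douglas-Rachford splitting (not the Split-DRS of the talk).
import numpy as np

def douglas_rachford(prox_f, prox_g, z0, step=1.0, max_iter=500, tol=1e-8):
    """Classical DRS: z <- z + prox_g(2x - z) - x, with x = prox_f(z)."""
    z = z0.copy()
    x = prox_f(z, step)
    for _ in range(max_iter):
        x = prox_f(z, step)               # prox/resolvent of the first term
        y = prox_g(2.0 * x - z, step)     # prox of the second term at the reflected point
        residual = y - x
        z = z + residual                  # fixed-point update of the governing sequence
        if np.linalg.norm(residual) <= tol:
            break
    return x                              # x approaches a minimizer of f + g

# Illustrative use: find a point in the intersection of a box and a hyperplane
# (both prox maps are Euclidean projections here).
prox_box = lambda v, t: np.clip(v, 0.0, 1.0)
a, b = np.array([1.0, 2.0, -1.0]), 0.5
prox_plane = lambda v, t: v - (a @ v - b) / (a @ a) * a
x_star = douglas_rachford(prox_box, prox_plane, np.zeros(3))
```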

Anderson Accelerated Douglas-Rachford Splitting
Anqi Fu, Junzi Zhang, Stephen Boyd
We consider the problem of nonsmooth convex optimization with linear equality constraints, where the objective function is accessible only through its proximal operator. To solve it, we propose an Anderson accelerated Douglas-Rachford splitting (A2DR) algorithm, which we show either globally converges or provides a certificate of infeasibility or unboundedness. Applied to a block-separable objective, A2DR partially decouples so that its steps can be carried out in parallel, yielding a highly scalable algorithm. We describe an open-source implementation along with a variety of applications.
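
A minimal sketch of the underlying idea, assuming plain type-II Anderson acceleration applied to a generic fixed-point map F (in A2DR, F would be the Douglas-Rachford iteration map). The actual A2DR adds the stabilization, safeguards, and certificates described in the abstract; all names below are illustrative.

```python
# Sketch of type-II Anderson acceleration of a generic fixed-point map (not the full A2DR).
import numpy as np

def anderson_accelerate(F, x0, memory=5, max_iter=200, tol=1e-8):
    """Accelerate the fixed-point iteration x <- F(x) by least-squares mixing."""
    x = x0.copy()
    g_hist, r_hist = [], []
    for _ in range(max_iter):
        g = F(x)
        r = g - x                              # fixed-point residual
        if np.linalg.norm(r) <= tol:
            return g
        g_hist.append(g); r_hist.append(r)
        if len(g_hist) > memory + 1:           # keep at most `memory` differences
            g_hist.pop(0); r_hist.pop(0)
        if len(g_hist) == 1:
            x = g                              # plain fixed-point step to start
            continue
        # Differences of residuals and of map values over the memory window.
        dR = np.column_stack([r_hist[i+1] - r_hist[i] for i in range(len(r_hist)-1)])
        dG = np.column_stack([g_hist[i+1] - g_hist[i] for i in range(len(g_hist)-1)])
        gamma, *_ = np.linalg.lstsq(dR, r, rcond=None)
        x = g - dG @ gamma                     # Anderson-mixed iterate
    return x

# Illustrative use: accelerate a simple averaged map whose fixed point solves M x = b.
M, b = np.array([[3.0, 1.0], [1.0, 2.0]]), np.array([1.0, 1.0])
F = lambda x: x - 0.2 * (M @ x - b)
x_star = anderson_accelerate(F, np.zeros(2))
```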

Douglas-Rachford splitting and ADMM for nonconvex optimization: Accelerated and Newton-type algorithms
Lorenzo Stella, Andreas Themelis, Panagiotis Patrinos
We propose a new line-search extension to Douglas-Rachford splitting (DRS) and ADMM that accelerates them with quasi-Newton directions. The algorithms are suited for nonconvex problems, require the same black-box oracle as the original methods, and maintain their (subsequential) convergence. Experiments show that using L-BFGS and Anderson acceleration directions greatly improves convergence over vanilla DRS and ADMM and makes them more robust to ill-conditioning. Under regularity and nondegeneracy assumptions at the limit point, superlinear convergence is shown when Broyden directions are used.
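
As a rough illustration of the idea, not the authors' envelope-based algorithm, one can accelerate the DRS fixed-point residual with a Broyden direction and a backtracking line search, falling back to the plain DRS step whenever the fast step fails a sufficient-decrease test. Every function and parameter below is an assumption of the sketch.

```python
# Simplified sketch: Broyden-accelerated DRS with a residual-norm line search and a
# plain-DRS fallback (the paper's merit function and safeguards differ).
import numpy as np

def drs_residual(prox_f, prox_g, z, step):
    """Residual z - T(z) of the DRS map T, together with the primal point x."""
    x = prox_f(z, step)
    y = prox_g(2.0 * x - z, step)
    return x - y, x

def broyden_drs(prox_f, prox_g, z0, step=1.0, max_iter=200, tol=1e-8, c=1e-4):
    z = z0.copy()
    H = np.eye(z.size)                        # approximate inverse Jacobian of the residual
    r, x = drs_residual(prox_f, prox_g, z, step)
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break
        d = -H @ r                            # quasi-Newton direction on the residual
        tau, accepted = 1.0, False
        while tau >= 1e-4:                    # backtracking line search on the residual norm
            z_new = z + tau * d
            r_new, x_new = drs_residual(prox_f, prox_g, z_new, step)
            if np.linalg.norm(r_new) <= (1.0 - c * tau) * np.linalg.norm(r):
                accepted = True
                break
            tau *= 0.5
        if not accepted:                      # safeguard: fall back to the plain DRS step
            z_new = z - r
            r_new, x_new = drs_residual(prox_f, prox_g, z_new, step)
        s, dy = z_new - z, r_new - r
        Hdy = H @ dy
        denom = s @ Hdy
        if abs(denom) > 1e-12:                # "good" Broyden update of the inverse Jacobian
            H += np.outer((s - Hdy) / denom, s @ H)
        z, r, x = z_new, r_new, x_new
    return x
```

The fallback to the unmodified DRS step is what lets such a scheme retain the (subsequential) convergence of the base method, in the safeguarding spirit described in the abstract.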
