Distributed Derivative-Free Optimization
There are many situations in which multiple agents seek to cooperatively solve a central optimization problem through a combination of local computation and communication over a network modeled as a graph. In contexts arising from hyperparameter tuning or from cooperative control, where the objective functions are defined by simulations, these problems are derivative-free. In this talk we study classical DFO algorithms adapted to distributed optimization. Theoretical properties and numerical performance are presented and assessed.
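As a rough illustration of this setting (not the algorithms studied in the talk), the sketch below combines consensus averaging over a graph with a derivative-free coordinate poll on each agent's local objective; the diminishing step schedule, the toy quadratics, and all names are assumptions for illustration only.

```python
import numpy as np

def distributed_dfo(local_fs, neighbors, x0, steps=200, alpha0=1.0):
    """Consensus averaging + local direct search (illustrative sketch).

    local_fs  : list of local black-box objectives f_i
    neighbors : adjacency lists of the communication graph
    x0        : common starting point
    """
    n, d = len(local_fs), len(x0)
    x = [np.array(x0, dtype=float) for _ in range(n)]
    for k in range(steps):
        # communication: each agent averages its estimate with its neighbours
        x = [np.mean([x[j] for j in [i] + neighbors[i]], axis=0)
             for i in range(n)]
        # computation: derivative-free coordinate poll with a diminishing step
        alpha = alpha0 / (k + 1)
        for i in range(n):
            best = local_fs[i](x[i])
            for e in np.eye(d):
                for s in (1.0, -1.0):
                    trial = x[i] + s * alpha * e
                    f_trial = local_fs[i](trial)
                    if f_trial < best:
                        x[i], best = trial, f_trial
    return x

# toy network: two agents joined by an edge, each with a private quadratic;
# the minimizer of the sum f_1 + f_2 is the origin
fs = [lambda z: np.sum((z - 1.0) ** 2), lambda z: np.sum((z + 1.0) ** 2)]
xs = distributed_dfo(fs, neighbors=[[1], [0]], x0=np.zeros(2))
```

With the diminishing step, the agents' estimates both approach consensus and drift toward the minimizer of the aggregate objective.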
Derivative-free optimization for large-scale structured problems
Andrea Cristofari, Francesco Rinaldi
In this talk, a derivative-free algorithm is proposed to minimize a black-box objective function over the convex hull of a given set of points. At each iteration, the approach solves a reduced problem based on an inner approximation of the feasible set. This inner approximation is dynamically updated using rules that, in general, keep the dimension of the problem small. Theoretical convergence results and a numerical comparison with state-of-the-art DFO solvers will be presented.
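The idea of working with a dynamically updated inner approximation can be sketched as follows; the weight-transfer moves, the vertex-insertion rule, and the toy problem are assumptions for illustration and not the authors' actual update rules.

```python
import numpy as np

def dfo_convex_hull(f, V, max_iter=100, step0=0.25, tol=1e-3):
    """Minimize a black-box f over conv{rows of V} via an inner
    approximation (active vertex set) enlarged only when needed.
    Illustrative sketch."""
    m = V.shape[0]
    active = [0]                       # start from a single vertex
    lam = np.array([1.0])              # simplex weights on the active vertices
    step = step0
    fx = f(lam @ V[active])
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        # transfer weight between active vertices (stays inside the simplex)
        for i in range(len(active)):
            for j in range(len(active)):
                if i == j:
                    continue
                t = min(step, lam[j])
                if t <= 0:
                    continue
                trial = lam.copy(); trial[i] += t; trial[j] -= t
                f_trial = f(trial @ V[active])
                if f_trial < fx:
                    lam, fx, improved = trial, f_trial, True
        if not improved:
            # enlarge the inner approximation with a promising unused vertex
            for k in range(m):
                if k in active:
                    continue
                trial = np.append((1 - step) * lam, step)
                f_trial = f(trial @ np.vstack([V[active], V[k:k + 1]]))
                if f_trial < fx:
                    active.append(k)
                    lam, fx, improved = trial, f_trial, True
                    break
            if not improved:
                step *= 0.5            # no progress: refine the step
        # drop vertices whose weight has vanished, keeping the problem small
        keep = lam > 1e-12
        active = [a for a, kp in zip(active, keep) if kp]
        lam = lam[keep] / lam[keep].sum()
    return lam @ V[active], fx

# toy example: square hull, smooth target inside it
V = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
f = lambda z: np.sum((z - np.array([0.75, 0.25])) ** 2)
x_best, f_best = dfo_convex_hull(f, V)
```

Vertices are added only when the current active set cannot make progress and dropped when their weight vanishes, which is what keeps the reduced problem low-dimensional.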
A stochastic Levenberg-Marquardt method using random models
Clément Royer, El houcine Bergou, Youssef Diouane, Vyacheslav Kungurtsev
In this talk, we describe a stochastic Levenberg-Marquardt algorithm that handles noisy objective function values and random models. Provided the probability of accurate function estimates and models is sufficiently large, we are able to endow our proposed framework with global convergence rates. Our results rely on both a specific scaling of the regularization parameter and a measure of criticality tailored to least-squares problems. We compare our convergence rates with several available for other stochastic algorithms, and discuss the assumptions under which these schemes can be analyzed.
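A minimal sketch of a Levenberg-Marquardt loop in this spirit is below; the Rosenbrock-type test problem, the noise model, the specific regularization scaling by the gradient norm, and the acceptance threshold are illustrative assumptions, not the algorithm analyzed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_r(x, sigma=1e-4):
    """Noisy residual oracle for a Rosenbrock-type least-squares problem
    (the problem and the noise model are assumptions for illustration)."""
    r = np.array([x[0] - 1.0, 10.0 * (x[1] - x[0] ** 2)])
    return r + sigma * rng.standard_normal(2)

def stochastic_lm(x0, iters=150, lam=1.0, h=1e-2):
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        # random model: finite-difference Jacobian built from noisy residuals
        r0 = noisy_r(x)
        J = np.column_stack([(noisy_r(x + h * e) - r0) / h
                             for e in np.eye(2)])
        g = J.T @ r0
        # LM step: regularization scaled by the criticality measure ||g||
        d = np.linalg.solve(J.T @ J + lam * np.linalg.norm(g) * np.eye(2), -g)
        # estimated actual vs. predicted reduction in 0.5 * ||r||^2
        f0 = 0.5 * r0 @ r0
        r1 = noisy_r(x + d)
        f1 = 0.5 * r1 @ r1
        pred = -(g @ d + 0.5 * d @ (J.T @ J) @ d)
        rho = (f0 - f1) / max(pred, 1e-16)
        if rho > 0.1:              # estimated success: accept, relax damping
            x = x + d
            lam = max(1e-6, 0.5 * lam)
        else:                      # estimated failure: increase regularization
            lam = 2.0 * lam
    return x

x_lm = stochastic_lm([-1.2, 1.0])
```

Since both the models and the function estimates are noisy, the acceptance ratio rho is itself a random quantity; the theory alluded to in the abstract controls how often such estimates must be accurate for convergence rates to hold.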
A Derivative-free Adaptation of the Penalty Decomposition Method for Sparse Optimization
Matteo Lapucci, Marco Sciandrone
We consider the problem of minimizing a smooth function subject to a cardinality constraint. A well-known approach to tackling such a problem is the Penalty Decomposition (PD) method, which requires exactly solving subproblems in the dimension of the original problem. In this talk, we present a derivative-free PD algorithm in which the exact minimization step is replaced by inexact line searches along suitable sets of directions. We establish theoretical convergence properties of the proposed algorithm equivalent to those of the original gradient-based PD method. Finally, we show the results of preliminary experiments.
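The PD structure described above can be sketched as follows: the x-update on the penalized function uses derivative-free coordinate searches, while the y-update is an exact projection onto the cardinality constraint. The search directions, the penalty schedule, and the toy objective are assumptions for illustration, not the talk's method.

```python
import numpy as np

def hard_threshold(x, s):
    """Projection onto {y : ||y||_0 <= s}: keep the s largest-magnitude entries."""
    y = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    y[idx] = x[idx]
    return y

def df_penalty_decomposition(f, x0, s, tau=1.0, outer=30, alpha0=1.0):
    """Sketch of a derivative-free Penalty Decomposition scheme:
    inexact coordinate searches replace the exact x-minimization."""
    x = np.array(x0, dtype=float)
    y = hard_threshold(x, s)
    for _ in range(outer):
        q = lambda z: f(z) + 0.5 * tau * np.sum((z - y) ** 2)
        # inexact x-step: derivative-free searches along coordinate directions
        alpha = alpha0
        while alpha > 1e-6:
            improved = False
            for e in np.eye(len(x)):
                for sgn in (1.0, -1.0):
                    trial = x + sgn * alpha * e
                    if q(trial) < q(x):
                        x = trial
                        improved = True
            if not improved:
                alpha *= 0.5
        # exact y-step: projection onto the sparse set
        y = hard_threshold(x, s)
        tau *= 1.5                 # increase the penalty parameter
    return hard_threshold(x, s)

# toy problem: separable quadratic, at most 2 nonzero variables;
# dropping the third (least influential) coordinate is optimal
f = lambda z: (z[0] - 3.0) ** 2 + (z[1] - 2.0) ** 2 + (z[2] - 0.1) ** 2
x_sparse = df_penalty_decomposition(f, np.zeros(3), s=2)
```

As the penalty parameter grows, x is pulled onto the sparse iterate y, so the pair converges toward a point feasible for the cardinality constraint.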