
FB-04: Numerical Methods for Multiobjective Optimization
Stream: Multiobjective Optimization
Room: Lagrange
Chair(s): Leo Warnow, Gabriele Eichfelder

An efficient descent method for locally Lipschitz multiobjective optimization problems
Bennet Gebken
In this talk, we propose a new descent method for multiobjective optimization problems (MOPs) with locally Lipschitz objective functions and afterwards combine it with the subdivision method to compute the whole Pareto set. The descent direction is based on epsilon-subdifferentials, which are iteratively enriched with gradient information until a sufficiently good descent direction is found. Combined with a modified Armijo step length, we prove convergence of the method to points that satisfy a necessary condition for Pareto optimality. Finally, the descent method is inserted into the subdivision method to compute the whole Pareto set.
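The nonsmooth machinery of epsilon-subdifferentials is beyond a short sketch, but the smooth core of such descent methods can be illustrated: the common descent direction is the negative of the minimum-norm element of the convex hull of the objective gradients, combined with an Armijo-type backtracking that enforces sufficient decrease in all objectives simultaneously. The following is a minimal sketch of that idea for two smooth objectives; all function names, parameters, and the toy problem are illustrative choices, not the talk's implementation.

```python
def min_norm_in_hull(g1, g2):
    """Minimum-norm point on the segment between the two gradients."""
    diff = [b - a for a, b in zip(g1, g2)]          # g2 - g1
    denom = sum(d * d for d in diff)
    if denom == 0.0:
        t = 1.0
    else:
        t = max(0.0, min(1.0, sum(b * d for b, d in zip(g2, diff)) / denom))
    return [t * a + (1.0 - t) * b for a, b in zip(g1, g2)]

def descent_step(x, fs, grads, c=1e-4, beta=0.5):
    """One iteration: min-norm direction plus multiobjective Armijo backtracking."""
    v = min_norm_in_hull(grads[0](x), grads[1](x))
    vnorm2 = sum(vi * vi for vi in v)
    if vnorm2 < 1e-12:            # no common descent direction: Pareto critical
        return x, True
    d = [-vi for vi in v]
    fx = [f(x) for f in fs]
    s = 1.0
    while s > 1e-12:
        xn = [xi + s * di for xi, di in zip(x, d)]
        # Armijo-type sufficient decrease must hold for every objective
        if all(f(xn) <= fxi - c * s * vnorm2 for f, fxi in zip(fs, fx)):
            return xn, False
        s *= beta
    return x, True                # step length collapsed: treat as critical

# Toy smooth biobjective problem; its Pareto set is {(x, 0): -1 <= x <= 1}.
f1 = lambda p: (p[0] - 1.0) ** 2 + p[1] ** 2
f2 = lambda p: (p[0] + 1.0) ** 2 + p[1] ** 2
df1 = lambda p: [2.0 * (p[0] - 1.0), 2.0 * p[1]]
df2 = lambda p: [2.0 * (p[0] + 1.0), 2.0 * p[1]]

x, critical = [2.0, 1.5], False
for _ in range(100):
    x, critical = descent_step(x, (f1, f2), (df1, df2))
    if critical:
        break
```

The min-norm subproblem has a closed form here because the convex hull of two gradients is a segment; with epsilon-subdifferentials, as in the talk, a quadratic program over many (sub)gradients would take its place.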

Worst-case complexity bounds of directional direct-search methods for multiobjective derivative-free optimization
Rohollah Garmanjani, Ana Luisa Custodio, Youssef Diouane, Elisa Riccietti
Direct Multisearch (DMS) is a well-established class of algorithms for multiobjective derivative-free optimization. We analyze the worst-case complexity of this class of methods in its most general form, for the class of nonconvex smooth functions. We then focus on a particular instance of DMS, which uses a stricter criterion for accepting new nondominated points and achieves a worst-case complexity bound similar to the one obtained for gradient descent on the same class of problems: of the order of epsilon^(-2) iterations to drive a criticality measure below a threshold epsilon.
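The basic DMS iteration analyzed here can be sketched as follows: maintain a list of nondominated points, poll around each of them along a fixed set of directions, and shrink the step size after rounds that produce no new nondominated point. This is a simplified illustration under placeholder choices (coordinate polling directions, halving step-size update, a toy biobjective problem), not the instance analyzed in the talk.

```python
def dominates(fa, fb):
    """True if fa is no worse in every objective and strictly better in one."""
    return all(a <= b for a, b in zip(fa, fb)) and fa != fb

def dms(F, x0, alpha=1.0, tol=0.05, max_rounds=200):
    """Tiny DMS-style loop on R^2 with coordinate polling directions."""
    front = {tuple(x0): F(x0)}                    # point -> objective values
    while alpha >= tol and max_rounds > 0:
        max_rounds -= 1
        trial = {}
        for (x, y) in front:                      # poll around every front point
            for dx, dy in ((alpha, 0), (-alpha, 0), (0, alpha), (0, -alpha)):
                p = (x + dx, y + dy)
                trial.setdefault(p, F(p))
        merged = {**front, **trial}
        new_front = {p: f for p, f in merged.items()
                     if not any(dominates(g, f) for g in merged.values())}
        if set(new_front) <= set(front):          # unsuccessful round: shrink step
            alpha *= 0.5
        front = new_front
    return front, alpha

# Biobjective test problem whose Pareto set is {(x, 0): -1 <= x <= 1}.
F = lambda p: ((p[0] - 1.0) ** 2 + p[1] ** 2, (p[0] + 1.0) ** 2 + p[1] ** 2)
front, alpha = dms(F, (2.0, 2.0))
```

The final step size alpha serves as the criticality surrogate in the complexity analysis: the epsilon^(-2)-type bound counts the iterations needed to drive such a measure below the threshold.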

A norm-minimization based convex vector optimization algorithm
Muhammad Umer, Firdevs Ulus, Cagin Ararat
We propose an algorithm to generate inner and outer polyhedral approximations to the upper image of a bounded convex vector optimization problem. It is an outer approximation algorithm and is based on solving norm-minimizing scalarizations, which differ from the Pascoletti-Serafini scalarization in that they do not involve a direction parameter. The algorithm is therefore free from direction bias. The finiteness of the algorithm is shown, and the convergence rate of the algorithm is studied under additional assumptions.
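On a toy problem, the norm-minimizing scalarization at an outer-approximation vertex v amounts to computing min ||z|| such that v + z lies in the upper image P, i.e., the Euclidean projection of v onto P, with no direction parameter involved. The sketch below assumes a biobjective problem with upper image P = {(a, b): a > 0, b >= 1/a} (Pareto front b = 1/a) and a vertex v outside P; the ternary-search solver is an illustrative choice, not the method of the talk.

```python
import math

def norm_min_scalarization(v, lo=1e-6, hi=1e6, iters=200):
    """Distance from v (assumed outside P) to P, and the boundary point attaining it."""
    def dist(t):
        # distance from v to the boundary point (t, 1/t) of P
        return math.hypot(t - v[0], 1.0 / t - v[1])
    for _ in range(iters):        # ternary search; dist is unimodal by convexity of P
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if dist(m1) < dist(m2):
            hi = m2
        else:
            lo = m1
    t = 0.5 * (lo + hi)
    return dist(t), (t, 1.0 / t)

# Vertex v = (0, 0) lies outside P; the nearest boundary point is (1, 1),
# so the minimal norm is sqrt(2).
d, y = norm_min_scalarization((0.0, 0.0))
```

Pascoletti-Serafini scalarization would instead fix a direction w and minimize t with v + t*w in P; the result then depends on the choice of w, which is the bias this scalarization avoids.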

Computing a box-coverage for multi-objective optimization problems
Leo Warnow, Gabriele Eichfelder
In this talk, we present how to compute a box-coverage of the nondominated set of a continuous multi-objective optimization problem. To do so, we use an approach that, in general, allows us to update not just one but several boxes whenever a new nondominated point is found. We also present some key results for this algorithm. For example, we show that the boxes are guaranteed to improve in each iteration and that the algorithm terminates in finite time with a finite number of boxes.
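The core box update can be sketched as follows: when a new nondominated point z is found inside a box B = [l, u] in objective space (minimization), the part of B dominated by z can be discarded, replacing B by at most m smaller boxes. The disjoint decomposition below is one standard choice for illustration; the talk's algorithm additionally updates several boxes at once and tracks a width-based quality measure, which this sketch omits.

```python
def split_box(l, u, z):
    """Replace box [l, u] by boxes covering [l, u] minus {y : y >= z componentwise}."""
    m = len(l)
    boxes = []
    for i in range(m):
        # i-th piece: points with y_j >= z_j for j < i and y_i < z_i
        nl = [max(l[j], z[j]) if j < i else l[j] for j in range(m)]
        nu = [z[j] if j == i else u[j] for j in range(m)]
        if all(a < b for a, b in zip(nl, nu)):   # drop empty pieces
            boxes.append((nl, nu))
    return boxes

# Example: point z = (1, 3) found in the box [0, 4] x [0, 4]; the corner
# [1, 4] x [3, 4] dominated by z is cut away, leaving two boxes.
boxes = split_box([0.0, 0.0], [4.0, 4.0], [1.0, 3.0])
```

Here the two returned boxes are [0, 1] x [0, 4] and [1, 4] x [0, 3]; their volumes (4 + 9) plus the removed corner (3) recover the original box, so the coverage shrinks in every such update.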
