Plenary Speakers

Martine Labbé (Université Libre de Bruxelles, Belgium, and INRIA Lille, France)

Linear bilevel optimization: overview and recent results
 
A bilevel optimization problem is an optimization problem in which some of the constraints specify that a subset of the variables must be an optimal solution to another optimization problem. This paradigm is particularly appropriate for modeling competition between two agents, a leader and a follower, who act sequentially.
In this talk I will focus on the simplest bilevel problems, those that are linear. I will present the main characteristics, properties and algorithms for these problems. Then, I will discuss some recent results showing that even these problems are extremely challenging.
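For concreteness, a linear bilevel problem in its optimistic form can be written, in generic notation (the matrices and vectors below are only placeholders, not taken from the talk), as

$$\min_{x,y}\; c^\top x + d^\top y \quad \text{s.t.}\quad Ax + By \le a,\qquad y \in \arg\min_{\bar y}\,\{\, f^\top \bar y \;:\; Cx + D\bar y \le b \,\},$$

where the leader fixes $x$ first and the follower then chooses an optimal reaction $y$ in the lower-level problem.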
This is a joint work with Thomas Kleinert, Martin Schmidt and Fränk Plein.

 

Oliver Stein (Karlsruhe Institute of Technology, Germany)

Granularity — a bridge between continuous and discrete optimization

In this talk we sketch the development of the granularity concept in optimization over the past five years. Granularity relaxes the difficulties imposed by integrality conditions and often provides ways for determining good feasible points of mixed-integer optimization problems at low computational cost.

Starting from error bound results for roundings in mixed-integer linear optimization, we illustrate how this concept unfolded to provide algorithms for the computation of feasible points in mixed-integer linear, convex and nonconvex optimization. We also comment on the treatment of equality constraints and explain the integration of the granularity idea into branch-and-bound frameworks.
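To indicate the mechanism behind such roundings (in generic mixed-integer linear notation, not necessarily that of the talk): for a feasible set $\{x :\; Ax \le b,\; x_j \in \mathbb{Z} \text{ for } j \in I\}$, one may consider the inner parallel set

$$T=\{x :\; a_i^\top x \le b_i - \tfrac{1}{2}\|\tilde a_i\|_1,\; i=1,\dots,m\},$$

where $\tilde a_i$ collects the entries of $a_i$ belonging to the integer variables. Rounding the integer components of any $x \in T$ to their nearest integers changes each $a_i^\top x$ by at most $\tfrac{1}{2}\|\tilde a_i\|_1$, so the rounded point is feasible for the original problem; granularity essentially asks that a (suitably enlarged) set of this type be nonempty.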

 

EUROPT Fellow 2020 Lecture by Manlio Gaudioso (Università della Calabria, Italy)

Nonsmooth Optimization for Classification Problems in Machine Learning

Classification is a machine learning technique whose objective is to assign each sample of a given population to exactly one of several possible classes. Samples are represented by numerical vectors in the space of a certain number of attributes (features), and a classifier is a mathematical tool designed starting from a dataset of samples with known class membership (the so-called training set). Once the classifier has been built, it is used to predict the class membership of newly incoming samples.

Most of the time, classification problems are binary, as only two classes are considered, and constructing the classifier requires solving a numerical optimization problem to detect an “optimal” separation surface in the sample space. Effective classification methods (e.g., the classic Support Vector Machine) are based on separating hyperplanes, and the problem at hand is formulated as a (possibly large-scale) quadratic program.
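For reference, the standard soft-margin SVM classifier for training samples $(x_i,y_i)$ with $y_i\in\{-1,+1\}$, $i=1,\dots,m$, is obtained from the quadratic program

$$\min_{w,b,\xi}\; \tfrac{1}{2}\|w\|^2 + C\sum_{i=1}^{m}\xi_i \quad \text{s.t.}\quad y_i(w^\top x_i + b)\ge 1-\xi_i,\;\; \xi_i\ge 0,\;\; i=1,\dots,m,$$

where the hyperplane $w^\top x + b = 0$ separates the two classes and the parameter $C>0$ trades margin width against classification errors.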

Nonsmooth optimization first comes into play when nonlinear separation surfaces are sought; we will survey polyhedral, spherical, ellipsoidal and conical separation models. In addition, we will focus on some more complex classification problems in which nonsmooth objective functions have to be dealt with.

We will consider, in particular:

  • The semisupervised setting, where class membership is assumed to be known just for a subset of the training set;

  • Multiple instance learning, where bags of samples, instead of single samples, are to be classified;

  • The feature selection process, where sparse classifiers are designed.

In all the above cases the optimization problems to be tackled are nonconvex and nonsmooth. Very often, however, they can be put in the DC (Difference of Convex) form. We will also provide the results of some numerical experiments.
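To fix notation for the generic scheme (the decomposition used for each specific model is part of the talk), a DC program minimizes $f = g - h$ with $g$ and $h$ convex, and the classical DCA-type iteration linearizes the concave part at the current iterate:

$$x^{k+1} \in \arg\min_{x}\; g(x) - \langle s^k, x\rangle, \qquad s^k \in \partial h(x^k),$$

so that each step only requires the solution of a convex subproblem.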

 

EUROPT Fellow 2021 Lecture by Monique Laurent (Centrum Wiskunde & Informatica, and Tilburg University, Netherlands)

Copositivity, sums of squares of polynomials and graphs

We investigate a hierarchy of semidefinite bounds for the stability number $\alpha(G)$ of a graph $G$, based on its continuous copositive programming formulation:

$$\alpha(G)=\min\{t: t(I+A_G)-J \in\text{COP}_n\},$$

where $n=|V_G|$, $A_G$ is the adjacency matrix of $G$ and $J$ is the all-ones matrix. Here, $\text{COP}_n$ is the copositive cone, consisting of the symmetric $n\times n$ matrices $M$ for which the polynomial $p_M=\sum_{i,j=1}^n M_{ij}x_i^2x_j^2$ is nonnegative over $\mathbb{R}^n$. By replacing the copositive cone $\text{COP}_n$ by its inner conic approximations $K^{(r)}_n$, consisting of the matrices $M$ for which the polynomial $(\sum_i x_i^2)^rp_M$ is a sum of squares, we obtain the bounds $\vartheta^{(r)}(G)$, known to converge asymptotically to $\alpha(G)$. De Klerk and Pasechnik (2002) conjectured that finite convergence takes place at order $r=\alpha(G)-1$, i.e., $\vartheta^{(r)}(G)=\alpha(G)$ for $r\ge \alpha(G)-1$. However, even the weaker conjecture asking whether equality holds for some $r$ is still open. We discuss old and new results around these questions and the cones $K^{(r)}_n$, which display a nice interplay between graph structure, polynomial optimization and real algebraic geometry.
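Explicitly, with the same notation, the resulting bounds read

$$\vartheta^{(r)}(G)=\min\{t :\; t(I+A_G)-J \in K^{(r)}_n\},$$

and since $K^{(r)}_n \subseteq K^{(r+1)}_n \subseteq \text{COP}_n$, they satisfy $\vartheta^{(r)}(G)\ge\vartheta^{(r+1)}(G)\ge\alpha(G)$; at order $r=0$, the cone $K^{(0)}_n$ consists of the matrices that can be written as the sum of a positive semidefinite matrix and an entrywise nonnegative matrix.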

Based on joint work with Luis Felipe Vargas (CWI, Amsterdam).
