Detailed Conference Program

Wednesday

Wednesday, 8:30 – 9:00

WA-01: Opening
Stream: Opening
Room: Fermat

Wednesday, 9:00 – 10:00

WB-01: Plenary – Martine LABBE
Stream: Plenary
Room: Fermat
Chair(s): Sonia Cafieri

  • Linear bilevel optimization: overview and recent results
    Martine Labbé A bilevel optimization problem is an optimization problem in which some of the constraints specify that a subset of variables must be an optimal solution to another optimization problem. This paradigm is particularly appropriate for modeling competition between agents, a leader and a follower, acting sequentially. In this talk I will focus on the simplest bilevel problems, those that are linear. I will present the main characteristics, properties and algorithms for these problems. Then, I will discuss some recent results showing that these problems are already extremely challenging. This is joint work with Thomas Kleinert, Martin Schmidt and Frank Plein.
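For orientation, a generic linear bilevel program of the kind considered in this plenary can be written schematically (generic notation, not taken from the talk) as

    \min_{x,y}\; c^\top x + d^\top y
    \quad\text{s.t.}\quad A x \le a, \qquad
    y \in \arg\min_{y'} \{\, f^\top y' \;:\; B x + C y' \le b \,\},

where the leader first chooses x and the follower then responds with a y that is optimal for the lower-level problem.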

Wednesday, 10:15 – 11:30

WC-01: Analysis of Non Linear Algorithms I
Stream: Analysis of Non Linear Algorithms
Room: Fermat
Chair(s): Tobias Seidel

  • Duality and penalty function methods for a nonlinear programming problem
    Andrei Dan Halanay, Miruna Beldiman We introduce a multiobjective nonlinear programming problem defined on a product of sigma-algebras. First, using a partitioning scheme, we propose a dual model for our problem and obtain duality results under different generalized univexity hypotheses. Then, considering a pseudometric corresponding to this context, we study two resolution methods using an exponential penalty function and then a logarithmic one. We prove the convergence of these methods, closing with some examples.
  • New Results on Superlinear Convergence of Classical Quasi-Newton Methods
    Anton Rodomanov, Yurii Nesterov We present a new theoretical analysis of local superlinear convergence of classical quasi-Newton methods from the convex Broyden class. As a result, we obtain a significant improvement in the currently known estimates of the convergence rates for these methods. In particular, we show that the corresponding rate of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method depends only on the product of the dimensionality of the problem and the logarithm of its condition number (a schematic form of such a rate is sketched after this session listing).
  • Calculating local and global minima with quADAPT
    Tobias Seidel, Karl-Heinz Kuefer A classical solution approach to semi-infinite optimization is based on discretizing the semi-infinite index set. To reduce the number of discretization points, they can be chosen adaptively. Recently a new adaptive discretization method (quADAPT) has been introduced, which guarantees quadratic convergence of the iterates under mild regularity assumptions. In this talk we establish conditions under which a limit point is a local minimum. We show that no such result is possible in the case of global minima. Finally, we introduce modifications to the algorithm that make such a statement possible.
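The adaptive discretization idea behind methods such as quADAPT can be conveyed by a minimal sketch, shown below as a generic Blankenship-Falk-type loop (an illustration under simplifying assumptions, not the quADAPT method itself; f, g and the scalar index interval are placeholders):

    import numpy as np
    from scipy.optimize import minimize

    def solve_sip(f, g, t_lo, t_hi, x0, tol=1e-6, max_outer=50):
        # Solve min f(x) s.t. g(x, t) <= 0 for all t in [t_lo, t_hi]
        # by adaptively refining a finite discretization of the index set.
        t_disc = [0.5 * (t_lo + t_hi)]
        x = np.asarray(x0, dtype=float)
        for _ in range(max_outer):
            cons = [{'type': 'ineq', 'fun': (lambda x, t=t: -g(x, t))}
                    for t in t_disc]
            x = minimize(f, x, constraints=cons).x       # discretized problem
            res = minimize(lambda t: -g(x, t[0]), [t_disc[-1]],
                           bounds=[(t_lo, t_hi)])        # most violated index
            t_new = res.x[0]
            if g(x, t_new) <= tol:                       # (near-)feasible for all t
                return x
            t_disc.append(t_new)                         # refine adaptively
        return x

    # Example: min x^2 subject to x >= sin(t) for all t in [0, pi] (optimum x = 1).
    sol = solve_sip(lambda x: x[0] ** 2,
                    lambda x, t: np.sin(t) - x[0], 0.0, np.pi, [0.0])

Note that the inner maximization is only solved locally here; quADAPT's contribution, as the abstract explains, lies in how the discretization points are updated so that the iterates converge quadratically.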
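For the Rodomanov-Nesterov talk above, the flavor of the estimates can be indicated schematically: superlinear rates for BFGS of the form

    \left( \frac{n \ln \kappa}{k} \right)^{k/2}

have been established by the authors, where n is the problem dimension, \kappa the condition number and k the iteration counter; this is stated as an indication only, and the precise constants and assumptions are those of their paper.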

WC-02: Advances in mathematical optimization for machine learning and data analysis – Part I
Stream: Advances in mathematical optimization for machine learning
Room: Turing
Chair(s): Le Thi Khanh Hien

  • An adaptive subsampled Hessian-free optimization method for statistical learning
    Jeremy RIEUSSEC, Fabian Bastin, Jean Laprés-Chartrand, Loïc Shi-Garrier We consider nonconvex statistical learning problems and propose a variable sample-path method, where the sample size is dynamically updated to ensure a decrease in the true objective function with high probability. We integrate this strategy in a subsampled Hessian-free trust-region method with truncated conjugate gradient, relying on outer product approximations. The approach is compared to various adaptive sample approximation algorithms and stochastic approximation methods popular in machine learning. Its efficiency is illustrated on various large-scale datasets.
  • An Inertial Newton Algorithm for Deep Learning
    Camille Castera, Jerome Bolte, Cédric Févotte, Edouard Pauwels We introduce a new second-order inertial optimization method for machine learning. It exploits the geometry of the loss function while only requiring stochastic approximations of the function values and the generalized gradients. The algorithm combines both gradient-descent and Newton-like behaviors as well as inertia. We prove the convergence of the algorithm for most deep learning problems using tame optimization and dynamical systems theory. Additionally we discuss and address the existence of spurious stationary points when using mini-batch methods for non-smooth problems.
  • An Inertial Block Majorization Minimization Framework for Nonsmooth Nonconvex Optimization
    Le Thi Khanh Hien, Duy Nhat Phan, Nicolas Gillis We introduce TITAN, an inertial block majorization minimization framework for nonsmooth nonconvex optimization problems. TITAN is a block coordinate method that embeds an inertial force into each majorization-minimization step of the block updates. The inertial force is obtained via an extrapolation operator that subsumes heavy-ball and Nesterov-type accelerations for block proximal gradient methods as special cases. We study subsequential convergence as well as global convergence of the sequence generated by TITAN. We illustrate the effectiveness of TITAN on the matrix completion problem.
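One concrete special case subsumed by the TITAN framework above is the inertial block proximal gradient update (generic notation, not the authors'): for block i with smooth part f and nonsmooth part g_i,

    \hat{x}_i^{k} = x_i^{k} + \beta_k \,(x_i^{k} - x_i^{k-1}), \qquad
    x_i^{k+1} = \mathrm{prox}_{\gamma g_i}\!\big( \hat{x}_i^{k} - \gamma \nabla_{x_i} f(x_1^{k+1}, \dots, \hat{x}_i^{k}, \dots, x_N^{k}) \big),

where the extrapolation with \beta_k produces the heavy-ball-type inertial force; TITAN replaces the proximal-gradient model by a general block surrogate (majorizer).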

WC-03: Derivative-Free Optimization I
Stream: Derivative-Free Optimization
Room: Nash
Chair(s): Massimo Roma

  • Regret Bounds for Noise-Free Bayesian Optimization
    Sattar Vakili Bayesian optimization is a powerful method for non-convex black-box optimization in low data regimes. However, establishing tight performance bounds for common algorithms in the noiseless setting remains largely open. We establish new, tightest-known bounds for two algorithms, namely GP-UCB and Thompson sampling, under the assumption that the objective function is smooth in the sense of having a bounded norm in a Matérn RKHS. Furthermore, we do not assume perfect knowledge of the kernel of the Gaussian process emulator used within the Bayesian optimization loop.
  • On some Bayesian optimization recipes applied to mono-multi-fidelity constrained black box problems
    Nathalie Bartoli, Thierry Lefebvre, Youssef Diouane, Sylvain Dubreuil, Joseph Morlier This work aims at developing new methodologies to optimize computationally costly complex systems (e.g., aeronautical engineering systems). The proposed surrogate-based method (often called Bayesian Optimization) uses adaptive sampling to promote a trade-off between exploration and exploitation. Our in-house implementation, called SEGOMOE, handles a high number of design variables and nonlinearities by combining mixtures of experts (local surrogate models) for the objective and/or the constraints. An extension to multi-fidelity is also included when information sources of varying fidelity are available.
  • On the use of Derivative-Free Optimization in designing Discrete Event Simulation models for Emergency Departments
    Massimo Roma, Alberto De Santis, Tommaso Giovannelli, Stefano Lucidi, Mauro Messedaglia The most widely used tool for studying patient flow through an Emergency Department (ED) is Discrete Event Simulation (DES). However, to achieve high reliability of such a DES model, an accurate calibration procedure is first required in order to determine good estimates of the model input parameters. In this work we propose a Simulation-Optimization approach integrating DES models with a Derivative-Free Optimization method in order to determine the best values of the model input parameters. The approach we propose has been extensively tested on a large ED in Rome.

WC-04: Optimisation in Aerospace Engineering
Stream: Optimisation in Aerospace Engineering
Room: Lagrange
Chair(s): Edmondo Minisci

  • Minimum-fuel Low-thrust Earth-Moon Transfers in the Sun-Earth-Moon System
    Richard EPENOY In this work, a minimum-fuel optimal control problem will be stated subject to the dynamics of the Sun-Earth-Moon Bicircular Restricted Four-Body Problem. To overcome the huge sensitivity of the problem, a massive derivative-free exploration will be used to find the zeros of the associated shooting function. Different families of medium-duration transfers will be identified, as well as long-duration trajectories exploiting the counterparts of invariant manifolds defined in both Sun-Earth and Earth-Moon Circular Restricted Three-Body Problems, leading to so-called low-energy transfers.
  • Analysis of Nature Inspired Methods for Interplanetary Trajectory Problems
    Edmondo Minisci, Gabriela Ochoa Many nature inspired optimisers have been used in the past decades to solve interplanetary trajectory problems. These problems are thought to have funnel or multi-funnel objective function landscapes, but only a few relevant works have tried to properly characterise the landscape and, more importantly, link the shape of the objective functions with the performance of metaheuristics and heuristics. To contribute to this research area, Local Optima Networks will be used to analyse, visualise and compare the behaviour of different types of solvers.
  • Interoperating direct and indirect solvers in optimal control
    Olivier Cots, Jean-Baptiste Caillau, Pierre Martinon Methods for solving optimal control problems fall into two main categories. Indirect methods based on Pontryagin's Principle are fast and accurate but require more work, in terms of mathematical analysis and a priori knowledge of the structure of the solution, than direct transcription approaches, which offer a good tradeoff between robustness and accuracy. For challenging problems, one may start with a direct method to find a rough solution, then refine it through an indirect method. We discuss this combined approach on applications and the interfacing between two solvers: Bocop and HamPath.

WC-05: Variational Methods in Vector and Set Optimization
Stream: Multiobjective Optimization
Room: Pontryagin
Chair(s): César Gutiérrez, Gabriele Eichfelder

  • Stability of set optimization problems
    Ruben Lopez, Elvira Hernández We study the stability of set optimization problems when the data (feasible sets and objective maps) admit variations. The data vary by means of a variational convergence notion for set-valued maps.
  • A Vectorization Scheme for Nonconvex Set Optimization Problems
    Stefan Rocktäschel, Gabriele Eichfelder, Ernest Quintana In this talk, we examine a solution approach for set-valued optimization problems with respect to the lower less relation. Thereby, we formulate a parametric vector optimization problem whose solution set approximates, in a specific sense, that of the set-valued optimization problem with arbitrary accuracy. We also investigate particular classes of set-valued mappings for which the corresponding set optimization problem is equivalent to the vector optimization problem. Surprisingly, set-valued mappings with a convex graph are one of the classes for which this equivalence holds.
  • Ekeland variational principles in vector equilibrium problems
    César Gutiérrez The talk addresses Ekeland variational principles for vector bifunctions defined on complete metric spaces. Several versions of that result are stated via a scalarization approach and new lower semicontinuity and lower boundedness assumptions. As a result, some recent Ekeland variational principles from the literature are improved, since weaker hypotheses are assumed.
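For reference, the classical scalar Ekeland variational principle that the above talk extends to vector bifunctions states: if (X,d) is a complete metric space and f : X → (-∞, +∞] is proper, lower semicontinuous and bounded below, then for all ε > 0, λ > 0 and any x_0 with f(x_0) ≤ inf_X f + ε there exists \bar{x} ∈ X such that

    f(\bar{x}) \le f(x_0), \qquad d(\bar{x}, x_0) \le \lambda, \qquad
    f(x) > f(\bar{x}) - \tfrac{\varepsilon}{\lambda}\, d(x, \bar{x}) \quad \text{for all } x \ne \bar{x}.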

WC-06: Applications of Optimization I
Stream: Applications of Optimization
Room: Moreau
Chair(s): Michal Kocvara

  • Topological optimization of porous architectural materials using a multiphysics homogenization method
    Godfred AGYEKUM OHENEBA, Laurent Cangemi, François JOUVE We will present a topological optimization method, based on a homogenized formulation of poroelastic type, for a fluid mass transfer problem within a material with periodically controlled microstructure. The optimization consists in minimizing an energy-type objective function, whose physical properties are correlated with a density of matter and with a microstructure at any point of the domain. Examples of optimized design results will be presented.
  • A Primal-Dual Augmented Lagrangian Algorithm for SDPs in truss topology design
    Arefeh Kavand In this talk, we focus on the augmented Lagrangian framework based on the PENNON algorithm. We suggest a primal-dual approach with a number of useful modifications for solving large SDP problems with low-rank solutions. For this, we replace the Newton systems in the PENNON algorithm with primal-dual systems, which are then solved approximately by the preconditioned conjugate gradient method using newly designed preconditioners. The efficiency of the algorithm is demonstrated by numerical experiments for truss topology optimization problems with growing dimension.
  • An Interior-Point Method for Low-Rank Semidefinite Programming with Application to Truss Topology Design
    Soodeh Habibi, Michal Kocvara General algorithms and software for semidefinite optimization are unsuitable to solve very large dimensional problems. In this talk, we focus on SDP problems with low-rank solutions. Within a standard framework of an interior-point method, we solve the linear systems by a preconditioned conjugate gradient method. We present a new preconditioner tailored to the low-rank structure of the solution. We solve large to very large-scale SDP problems from structural optimization with either rank-one or approximate low-rank solutions. In both cases, our Matlab code outperforms available SDP software.

Wednesday, 11:45 – 13:00

WD-01: Analysis of Non Linear Algorithms II
Stream: Analysis of Non Linear Algorithms
Room: Fermat
Chair(s): Vladimir Shikhman

  • An Interior Point-Proximal Method of Multipliers for Convex Programming
    Spyridon Pougkakiotis, Jacek Gondzio In this talk, we present an infeasible Interior Point Method (IPM) combined with the Proximal Method of Multipliers (PMM). The resulting algorithm (IP-PMM) is interpreted as a primal-dual regularized IPM, suitable for solving convex programming problems. Given this framework, we prove polynomial complexity of the algorithm for a wide class of problems, under standard assumptions. We derive a robust tuning for the penalty parameters as well as an ill-posedness detection mechanism. The method is implemented and its reliability is demonstrated through extensive experimentation.
  • Topological approach to mathematical programs with switching constraints
    Vladimir Shikhman We study mathematical programs with switching constraints (MPSC) from the topological perspective. Two basic theorems from Morse theory are proved. Outside the W-stationary point set, continuous deformation of lower level sets can be performed. However, when passing a W-stationary level, the topology of the lower level set changes via the attachment of a w-dimensional cell. The dimension w depends on both the number of negative eigenvalues of the restricted Lagrangian’s Hessian and the number of bi-active switching constraints.
  • Local convergence of tensor methods
    Nikita Doikov, Yurii Nesterov In this talk, we study local convergence of high-order Tensor Methods for solving convex optimization problems with composite objective. We justify local superlinear convergence under the assumption of uniform convexity of the smooth component, having Lipschitz-continuous high-order derivative. The convergence both in function value and in the norm of minimal subgradient is established. Also, we show how local convergence of the methods can be globalized using the inexact proximal iterations.

WD-02: Advances in mathematical optimization for machine learning and data analysis – Part II
Stream: Advances in mathematical optimization for machine learning
Room: Turing
Chair(s): Dimitri Papadimitriou

  • A Bregman Method for Structure Learning on Sparse Directed Acyclic Graphs
    Manon Romain We develop a Bregman proximal gradient method for structure learning on linear structural causal models. While the problem is non-convex, has high curvature and is NP-hard, Bregman gradient methods allow us to neutralize at least part of the impact of curvature by measuring smoothness against a highly nonlinear kernel. This allows the method to make longer steps and significantly improves convergence. Each iteration requires solving a Bregman proximal step, which is convex and efficiently solvable for our particular choice of kernel (the generic update is recalled after this session listing). We test our method on various synthetic and real data.
  • An adaptive, accelerated algorithm for stochastic constrained convex optimization
    Ali Kavis, Kfir Levy, Francis Bach, Volkan Cevher We propose a novel adaptive, accelerated algorithm for stochastic constrained convex optimization. Our method, inspired by the Mirror-Prox method, simultaneously achieves the optimal rates for smooth/non-smooth problems with either deterministic/stochastic first-order oracles. This is done without any prior knowledge of the smoothness nor the noise properties of the problem. To the best of our knowledge, this is the first adaptive, unified algorithm that achieves optimal rates in the constrained setting. Through extensive numerical experiments, we demonstrate the performance of our framework.
  • Recent Advances in the Theory of (Adaptive) Stochastic Gradient Methods
    Vivak Patel Stochastic Gradient Descent (SGD) and its adaptive variants are foundational algorithms in data science, where it is essential to understand the behavior of these methods. In this talk, we establish several novel results. For standard SGD, we demonstrate L2 and strong convergence for a broad class of nonconvex functions, subsuming most recent results. Moreover, we provide a unified consistency and asymptotic normality analysis for a broad class of convex functions and adaptive methods, improving on many recent efforts.
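The Bregman proximal step mentioned in the first talk of this session has the generic form (our notation; the talk's specific kernel is tailored to the causal-structure problem): with kernel h, step size γ > 0 and Bregman distance D_h(x,y) = h(x) - h(y) - ⟨∇h(y), x - y⟩,

    x^{k+1} \in \arg\min_{x} \Big\{ \langle \nabla f(x^k), x \rangle + g(x) + \tfrac{1}{\gamma}\, D_h(x, x^k) \Big\},

so that the smoothness of f is measured relative to h instead of through a global Lipschitz constant of the gradient.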

WD-03: Interfaces between Optimization and Game Theory
Stream: Variational inequalities, Nash games, game theory, multilevel and dynamic optimization
Room: Nash
Chair(s): Laura Rosa Maria Scrimali

  • Computing mixed strategies equilibria in presence of switching costs by the solution of nonconvex QP problems
    Veronica Piccialli, Giampaolo Liuzzi, Marco Locatelli, Stefan Rass In this work, we address a game theory problem arising in the context of network security. In traditional game theory problems, given a defender and an attacker, one searches for mixed strategies that minimize a linear payoff functional. Here an additional quadratic term is added to the minimization problem, representing switching costs, i.e., the costs for the defender of switching from a given strategy to another one. The resulting problem is a nonconvex QP problem with linear constraints, which is NP-hard. We propose an effective branch-and-bound algorithm for solving it.
  • Decentralize and Randomize: Faster Algorithm for Wasserstein Barycenters
    Pavel Dvurechensky We study the decentralized distributed computation of discrete approximations for the regularized Wasserstein barycenter of a finite set of continuous probability measures distributedly stored over a network. We assume there is a network of agents/machines/computers, and each agent holds a private continuous probability measure and seeks to compute the barycenter of all the measures in the network by getting samples from its local measure and exchanging information with its neighbors. Motivated by this problem, we develop and analyze a novel accelerated primal-dual stochastic gradient method.
  • Solving non-monotone equilibrium problems via a DIRECT-type approach
    Mauro Passacantando, Stefano Lucidi, Francesco Rinaldi A global optimization approach for solving non-monotone equilibrium problems (EPs) is proposed. The class of (regularized) gap functions is used to reformulate any EP as a constrained global optimization program and some bounds on the Lipschitz constant of such functions are provided. The proposed approach is a combination of an improved version of the DIRECT algorithm with local minimizations. Unlike most existing solution methods for EPs, no monotonicity-type condition is assumed in this paper. Preliminary numerical results on several classes of EPs show the effectiveness of the approach.
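In the last talk above, the (regularized) gap functions have the standard form: for the equilibrium problem of finding x* ∈ C with f(x*, y) ≥ 0 for all y ∈ C,

    g_\alpha(x) = \max_{y \in C} \Big\{ -f(x,y) - \tfrac{\alpha}{2} \|y - x\|^2 \Big\}, \qquad \alpha \ge 0,

which is nonnegative on C and vanishes exactly at solutions, so the EP is recast as the global minimization of g_\alpha over C; this is the reformulation to which the DIRECT-type approach is applied.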

WD-04: Applications of Optimization II
Stream: Applications of Optimization
Room: Lagrange
Chair(s): Sebastien Bourguignon

  • Continuous optimization of Fourier sampling schemes for MRI
    Alban Gossard, Frédéric de Gournay, Pierre Weiss Fourier sampling is a critical issue in image processing. While Shannon and compressed sensing theories dominated the field for decades, a recent trend is to optimize the sampling scheme for a specific dataset. In this work, we'll focus on methods that continuously optimize the sampling points with respect to a reconstruction algorithm and to a database. Most attempts in this direction report optimization issues. We show that this phenomenon cannot be avoided since the problem is intrinsically combinatorial and has a huge number of spurious minimizers. We propose ideas to mitigate this issue.
  • GPU-Accelerated Plan Optimization for Intensity-Modulated Radiotherapy
    Juan José Moreno Riado, Janusz Miroforidis, Dmitry Podkopaev, Ernestas Filatovas, Ignacy Kaliszewski, Ester M Garzon Intensity-Modulated Radiotherapy (IMRT) is a technique for cancer treatment that allows precise control over the geometry and intensity profile of radiation beams. Since the dose deposited in the patient’s body by a given IMRT plan is modeled by a sparse matrix, most of the computation time during the plan optimization stage is spent in sparse matrix multiplications. Therefore, an adequate use of HPC techniques can drastically reduce planning time. In this work, we describe a GPU-accelerated gradient-based optimization technique capable of generating adequate plans in a limited time span.
  • Optimization of molecular descriptors using memetic algorithms
    Savíns Puertas Martín, Juana Lopez Redondo, Horacio Pérez-Sánchez, Pilar M. Ortigosa One of the aims of Drug Discovery is to find compounds similar to a reference. For this purpose, different methods are appearing, among which we highlight Virtual Screening (VS). Usually, the VS techniques are ad-hoc methods that optimize a particular descriptor used as a scoring function. In this work, we propose a generic memetic optimization algorithm able to deal with any descriptor. Results show that our proposal outperforms the solutions provided by the state-of-the-art algorithms, previously proposed in the literature.

WD-05: Polynomial Optimization I
Stream: Polynomial Optimization
Room: Pontryagin
Chair(s): Marek Tyburec

  • Sparse moment-sum-of-squares relaxations for nonlinear dynamical systems with guaranteed convergence
    Corbinian Schlosser, Milan Korda We present a sparse moment-sum-of-squares approach for sparse dynamical systems that can be applied to the computation of regions of attraction, maximum positively invariant sets and global attractors. We prove a decomposition of these sets provided that the vector field and constraint set possess certain structure. We combine these decompositions with existing methods that are based on infinite-dimensional linear programming. In the case of sparse polynomial dynamics, we show that these methods admit a sparse sum-of-squares (SOS) approximation with guaranteed convergence.
  • Polynomial optimization applied to set computation
    Benoît Legat, Raphaël M. Jungers The search for a set satisfying given properties is commonly narrowed down to polytopes or ellipsoids. However, in high dimension, the size of the representation of feasible polytopes might not be practical, and the restriction to ellipsoids might be too conservative. In this talk, we explore the computation of sets defined by polynomial functions. For convex sets, we discuss the choice to parametrize the search in terms of the gauge or support function of the set (both notions are recalled after this session listing), depending on the properties that the set should satisfy.
  • Exploiting constant trace property in large-scale polynomial optimization
    Ngoc Hoang Anh Mai, Jean Bernard Lasserre, Victor Magron We prove that every semidefinite moment relaxation of a polynomial optimization problem (POP) with a ball constraint can be reformulated as a semidefinite program involving a matrix with constant trace property (CTP). As a result such moment relaxations can be solved efficiently by first-order methods that exploit CTP, e.g., the conditional gradient-based augmented Lagrangian method. The efficiency and scalability of our framework are illustrated on second-order moment relaxations for various randomly generated QCQPs. This is joint work with Lasserre, Magron and Wang.
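The two parametrizations mentioned in the set-computation talk above (Legat and Jungers) are the classical ones: for a convex set C,

    \sigma_C(y) = \sup_{x \in C} \langle x, y \rangle \quad \text{(support function)}, \qquad
    \gamma_C(x) = \inf\{ \lambda > 0 : x \in \lambda C \} \quad \text{(gauge)},

the former describing C through its supporting hyperplanes and the latter through scaling towards the set.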

WD-06: Energetic and Environmental Applications of Optimization I
Stream: Energetic and Environmental Applications of Optimization
Room: Moreau
Chair(s): Juana Lopez Redondo

  • Decomposition of long-term investment optimization models for large-scale integration of wind power in Europe
    Caroline Granfeldt To capture the variability in electricity generation from variable renewable energy sources, mathematical modeling of future electricity systems must include a fine discretization of time. However, this leads to huge-scale optimization problems. We have developed a linear optimization model that maintains a high temporal resolution while capturing the most important aspects of the problem to minimize future costs of electricity production. I will discuss how a variable splitting-based Lagrangean relaxation and subgradient algorithm enables a parallel solution process for this model (a schematic dual update is sketched after this session listing).
  • ETS, Emissions and the Energy-Mix Problem
    Paolo Falbo We study the impact of ETS on emissions and energy-mix through a bilevel model. At the upper level, a policymaker maximizes a welfare function, deciding the number of allowances to distribute to the electricity producers. At the lower level, two large producers decide the long-term capacity expansion among three technologies: renewables, coal and gas. The uncertainty is modelled through Markov chain bootstrapping scenarios, made of coal and gas prices and electricity demand. The problem is solved considering a large set of Pareto efficient solutions between the two electricity producers.
  • Optical characterization of heliostat facets through computational optimization
    N.C. Cruz, R. Monterreal, Juana Lopez Redondo, J. Fernández-Reche, R.E. Orts, Pilar M. Ortigosa Solar power tower plants use mirrors, known as heliostats, to collect solar radiation. They consist of reflective panels, called facets, whose building specifications must be checked. The Direct Method studies the features of the facet’s surface itself. The Indirect Method uses special equipment, such as ultra-sensitive cameras, to analyze the solar image reflected by the studied facet on the target. This work focuses on the latter approach and formalizes an optical characterization procedure through computational optimization to avoid the traditional trial-and-error strategy.
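The variable splitting-based Lagrangean relaxation sketched in the first talk of this session follows, schematically, the standard pattern (generic notation; the talk's decomposition may differ in detail): duplicate the coupling variables via x = z, relax this constraint with multipliers λ, and iterate

    (x^{k}, z^{k}) \in \arg\min_{x \in X,\, z \in Z} \; f(x) + g(z) + \langle \lambda^k, x - z \rangle, \qquad
    \lambda^{k+1} = \lambda^k + s_k (x^{k} - z^{k}),

where the inner minimization separates over x and z and can therefore be solved in parallel, and s_k is a subgradient step size.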

Wednesday, 14:30 – 15:30

WE-01: EUROPT Fellow 2020 Lecture
Stream: Plenary
Room: Fermat
Chair(s): Laura Palagi, Giancarlo Bigi

  • Nonsmooth Optimization for Classification Problems in Machine Learning
    Manlio Gaudioso Classification is a machine learning technique where the objective is to assign each sample of a given population to exactly one from among several possible classes. Samples are represented by numerical vectors in the space of a certain number of attributes (features), and a classifier is a mathematical tool that is designed starting from a dataset of samples with known class membership (the so-called training set). Once the classifier has been built, it is used to predict the class membership of newly incoming samples. Most of the time classification problems are binary, as the classes considered are just two, and constructing the classifier requires the solution of a numerical optimization problem to detect an “optimal” separation surface in the sample space. Effective classification methods (e.g. the classic Support Vector Machine one) are based on the use of separating hyperplanes, and the problem at hand is formulated as a (possibly large-scale) quadratic program. Nonsmooth optimization comes into play first when nonlinear separation surfaces are sought; we will survey polyhedral, spherical, ellipsoidal and conical separation models. In addition we will focus on some more complex classification problems, where nonsmooth objective functions are to be dealt with. We will consider, in particular:
      • the semisupervised setting, where class membership is assumed to be known just for a subset of the training set;
      • multiple instance learning, where bags of samples instead of single ones are to be classified;
      • the feature selection process, where sparse classifiers are designed.
    In all the above cases the optimization problems to be tackled are nonconvex and nonsmooth. Very often, however, they can be put in DC (Difference of Convex) form. We will also provide the results of some numerical experiments.
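As a concrete instance of the quadratic program mentioned in the lecture abstract, the classic soft-margin Support Vector Machine primal (standard textbook formulation, not specific to this lecture) reads

    \min_{w, b, \xi} \; \tfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{m} \xi_i
    \quad \text{s.t.} \quad y_i(\langle w, x_i \rangle + b) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad i = 1, \dots, m,

where (x_i, y_i) are the training samples with labels y_i ∈ {-1, +1}; the nonsmooth, nonconvex variants surveyed in the lecture replace the separating hyperplane (w, b) by more general surfaces.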

Wednesday, 15:45 – 17:30

WF-01: ADMM, block variants and proximality
Stream: Alternating Direction Method of Multipliers and its Applications
Room: Fermat
Chair(s): Jacek Gondzio

  • A block symmetric Gauss-Seidel decomposition theorem for convex quadratic programming and its applications in multi-block ADMM
    Kim-Chuan Toh For a multi-block convex composite quadratic programming (CCQP) with an additional nonsmooth term in the first block, we present a block symmetric Gauss-Seidel (sGS) decomposition theorem, which states that each cycle of the block sGS method is equivalent to solving the CCQP with an additional sGS proximal term constructed from the quadratic function. This theorem has played a key role in various recently developed inexact proximal ADMMs for multi-block convex conic programming. We demonstrate how our sGS-based ADMM can be applied to solve doubly nonnegative SDPs.
  • A purely numerical linear algebra view on ADMM
    Stefano Cipolla, Jacek Gondzio Because of its versatility and applicability in multiple fields, the n-block alternating direction method of multipliers (ADMM) for solving non-separable convex minimization problems has recently attracted the attention of many researchers. Despite the fact that the connections between ADMM and Gauss-Seidel are well known, to the best of our knowledge, an analysis from the purely numerical linear algebra point of view is lacking in the literature. The aim of this talk is to present a series of very recent results obtained on this topic, which shed further light on basic issues such as convergence.
  • Primal-dual proximal methods with Bregman distances
    Xin Jiang, Lieven Vandenberghe We discuss Bregman distance extensions of the primal-dual three-operator (PD3O) and Condat-Vu proximal algorithms. When used with standard proximal operators these algorithms include several important methods, including ADMM, as special cases. Extensions to generalized Bregman distances are attractive if the complexity per iteration can be reduced by matching the Bregman distance to the structure in the problem. For practical implementation we introduce line search techniques in the proposed methods. The results will be illustrated with applications in semidefinite optimization.
  • Multi-Block ADMM – Managing Randomness and Data Privacy in Optimization Algorithm Design
    Yinyu Ye Randomization strategies have been widely used in designing optimization algorithms to achieve properties that deterministic algorithms cannot, as in SGD, BCD, reinforcement learning, etc. In this talk, we show how randomized techniques help in greatly improving the efficiency of the multi-block alternating direction method with multipliers. On the other hand, we illustrate that too much randomness may be harmful to algorithm convergence. Therefore, randomness needs to be carefully managed to take advantage of the strategy. In addition, we discuss how to implement ADMM while preserving data privacy.

WF-02: Beyond First-order Optimization Methods for Machine Learning – Part I
Stream: Advances in mathematical optimization for machine learning
Room: Turing
Chair(s): Fred Roosta, Albert Berahas

  • Sequential Quadratic Optimization for Nonlinear Equality Constrained Stochastic Optimization
    Albert Berahas, Frank E. Curtis Stochastic gradient and related methods for solving unconstrained stochastic optimization problems have been studied extensively in recent years. However, settings with general nonlinear constraints have received less attention, and many of the proposed methods resort to using penalty or Lagrangian methods, which are often not the most effective strategies. In this work, we propose and analyze stochastic optimization methods based on the sequential quadratic optimization methodology. We discuss advantages and disadvantages of our approaches. Collaborators: F. E. Curtis, D. Robinson & B. Zhou.
  • Stochastic Polyak Step-size for SGD: An Adaptive Learning Rate for Fast Convergence
    Nicolas Loizou We propose a stochastic variant of the classical Polyak step-size (Polyak, 1987) commonly used in the subgradient method. Although computing the Polyak step-size requires knowledge of the optimal function values, this information is readily available for typical modern machine learning applications. Consequently, the proposed stochastic Polyak step-size (SPS) is an attractive choice for setting the learning rate for stochastic gradient descent (SGD); the step-size formula is recalled after this session listing. We provide theoretical convergence guarantees for SGD with SPS in different settings, including strongly convex, convex and non-convex functions.
  • Systematic Second-order Methods for Training, Designing, and Deploying Neural Networks
    Amir Gholami Finding the right Neural Network model and training it for a new task requires considerable expertise and extensive computational resources. Moreover, the process often includes ad-hoc rules that do not generalize to different application domains. These issues have limited the applicability and usefulness of DNN models, especially for new learning tasks. This problem is becoming more acute, as datasets and models grow larger, which increases training time, making random/brute-force search approaches quickly untenable. In large part, this situation is due to the first-order stochastic gradient methods that currently dominate training.
  • Distributed Learning of Deep Neural Networks using Independent Subnet Training
    Anastasios Kyrillidis We propose a new approach to distributed neural network learning, called independent subnet training (IST). In IST, a neural network is decomposed into a set of subnetworks of the same depth as the original network, each of which is trained locally, before the various subnets are exchanged and the process is repeated. IST training has many advantages over standard data parallel approaches. We show experimentally that IST results in training times that are much lower than data parallel approaches to distributed learning.
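The stochastic Polyak step-size from the second talk of this session can be stated compactly (schematic form; c > 0 is a scaling constant and the precise details are in the authors' work): for the loss f_i sampled at iteration k,

    \gamma_k = \frac{f_i(x^k) - f_i^*}{c\, \|\nabla f_i(x^k)\|^2}, \qquad
    x^{k+1} = x^k - \gamma_k \nabla f_i(x^k),

where f_i^* is the minimum of f_i, which is indeed readily available (frequently zero) for overparametrized models that interpolate the data.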

WF-03: Mixed Integer Non Linear Programming
Stream: Mixed Integer Non Linear Programming
Room: Nash
Chair(s): Sourour Elloumi

  • Branch-and-bound algorithm applied to sparse optimization problems: a study of some exploration strategies.
    Gwenaël Samain, Jordan Ninin, Sebastien Bourguignon Sparse optimization focuses on finding a solution to least-squares problems with few non-zero components. Applications include inverse problems in signal processing, subset selection, and portfolio optimization. Such problems can be formulated as mixed-integer programs (one common formulation is recalled after this session listing). A dedicated branch-and-bound algorithm is able to exploit the specific structure of such problems and finds the global minimum much faster than generic MIP solvers. We will present some results about the fine-tuning process of this branch-and-bound algorithm, focusing mainly on the exploration strategy.
  • Continuous location in spaces with different lp-normed regions.
    Moisés Rodríguez-Madrena, Martine Labbé, Justo Puerto In this work we address the single-facility Weber location problem in a finite-dimensional real space endowed with different lp-norms at different polyhedral regions. We propose an equivalent representation of the location problem as a mixed-integer second-order cone mathematical programming formulation. Interior point algorithms embedded in a branch-and-bound search can be applied, allowing approximate solutions to be obtained with a high degree of precision. Our approach is validated by reporting some preliminary computational experiments.
  • Large-scale Global Mixed-Integer Nonlinear Problems Solver by a Bi-level Generalized-CGRASP Metaheuristic Method
    João Lauro Facó, Ricardo Silva, Mauricio Resende Large-scale MINLP problems with n bounded continuous variables, d discrete variables, and m constraints, where n and m are large and n+d>m, can be solved in two levels. Repeat: (1) Optimization: a reduced problem in (n-m) continuous variables and d discrete variables is solved by a Generalized-CGRASP method, where the random search and local improvement phases use a continuous and a discrete set. (2) Feasibility: test the feasibility of the remaining continuous variables by solving a system of m nonlinear equations by CGRASP, keeping the former variables fixed. If any variable violates a bound, do a change-of-basis.
  • Tightest linear reformulations for polynomial optimization with binary variables
    Mathieu Verchère, Sourour Elloumi Polynomial optimization problems with binary variables are known to be strongly NP-hard. In this talk, we consider the reformulation of these problems into equivalent mixed integer linear problems. For a given polynomial with binary variables, many such linear reformulations may exist and we focus on the question of how to find a reformulation with a tight LP bound. For polynomials with degree up to four, we formulate this question as a mixed integer linear problem. We then provide insights on potential improvements, as well as a theoretical comparison with other existing methods.
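The building block behind the linear reformulations in the last talk above is the classical linearization of a product of binaries: for x_1, x_2 ∈ {0,1}, introducing y to stand for x_1 x_2 is exact under the linear constraints

    y \le x_1, \qquad y \le x_2, \qquad y \ge x_1 + x_2 - 1, \qquad y \ge 0;

for higher-degree monomials many inequivalent linearizations exist, and the talk asks which choice yields the tightest LP bound.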
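For the sparse-optimization talk that opens this session, a common mixed-integer formulation of sparse least squares (one standard big-M variant; the speakers' exact model may differ) is

    \min_{x \in \mathbb{R}^n,\, z \in \{0,1\}^n} \; \|Ax - b\|_2^2
    \quad \text{s.t.} \quad -M z_i \le x_i \le M z_i \;\; (i = 1, \dots, n), \qquad \sum_{i=1}^{n} z_i \le K,

where z_i switches component x_i on or off and K caps the number of nonzeros; branch-and-bound then explores the binary variables z.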

WF-04: Constrained optimization methods and solvers in Julia
Stream: Constrained optimization methods and solvers in Julia
Room: Lagrange
Chair(s): Miguel F. Anjos

  • Krylov methods in interior-point algorithms: a computational survey
    Mathieu Tanneau, Alexis Montoison Following growing interest in the community, we present a computational survey on the use of Krylov methods for solving the linear systems that arise in interior-point algorithms. First, we review the various linear systems that can be formulated from the so-called Newton system, and establish a nomenclature of which Krylov methods are suited to which linear system formulation. Then, we review a number of generic preconditioners, both from mathematical and practical perspectives. Finally, we compare the numerical behavior of each possible approach through extensive computational experiments.
  • Hypatia.jl: A Generic Nonsymmetric Conic Optimization Solver in Julia
    Chris Coey, Lea Kapelevich, Juan Pablo Vielma Hypatia is an open-source optimization solver in Julia and is accessible through a native interface and through JuMP/MathOptInterface. Hypatia makes it easy to model and solve primal-dual conic problems involving general convex cones for which appropriate primal OR dual barrier functions are known. We introduce Hypatia’s interfaces and algorithms and demonstrate computational advantages of compact “natural” conic formulations over extended formulations that use only “classical” cones. We also describe some algorithmic advances that have helped to make Hypatia competitive.
  • Conditional gradient methods for large-scale constrained optimization
    Mathieu Besançon, Sebastian Pokutta, Alejandro Carderera Conditional gradient algorithms allow the integration of convex constraints in a first-order optimization method. We present a new Julia toolbox integrating several variants of the Conditional Gradient method and detail the design choices that enable large-scale and atypical applications. In particular, we will present the linear minimization oracle interface, which makes the library extensible and allows users to leverage closed-form solutions of linear minimization subproblems when they are known (a minimal sketch of this interface is given after this session listing).
  • DiffOpt.jl: differentiating your favorite optimization problems
    Joaquim Dias Garcia, Mathieu Besançon, Benoît Legat, Akshay Sharma DiffOpt aims at differentiating optimization problems written in MathOptInterface (MOI). Because MOI is JuMP’s lower level interface with solvers, this will “just work” with JuMP problems. The current framework is based on existing techniques for differentiating the solution of optimization problems with respect to the input parameters. We will show the current state of the package that supports Quadratic Programs and Conic Programs. Moreover, we will highlight how other packages are used to keep the library generic and efficient.
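The linear minimization oracle (LMO) interface highlighted in the conditional gradient talk above can be illustrated with a minimal, self-contained sketch (plain Python for illustration, not the Julia toolbox itself; the classical open-loop step size is used):

    import numpy as np

    def lmo_simplex(grad):
        # LMO over the probability simplex: the minimizer of a linear
        # function over the simplex is attained at a vertex.
        v = np.zeros_like(grad)
        v[np.argmin(grad)] = 1.0
        return v

    def frank_wolfe(grad_f, x0, iters=200):
        x = np.asarray(x0, dtype=float)
        for k in range(iters):
            v = lmo_simplex(grad_f(x))         # closed-form linear subproblem
            gamma = 2.0 / (k + 2.0)            # classical open-loop step size
            x = (1.0 - gamma) * x + gamma * v  # convex combination keeps feasibility
        return x

    # Example: Euclidean projection of y onto the simplex, min ||x - y||^2.
    y = np.array([0.2, 1.5, -0.3])
    x_star = frank_wolfe(lambda x: 2.0 * (x - y), np.ones(3) / 3)

Swapping lmo_simplex for another set's oracle changes the feasible region without touching the rest of the loop, which is the extensibility point the talk makes.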

WF-05: Multiobjective Optimization: Uncertainty and Nonconvexity
Stream: Multiobjective Optimization
Room: Pontryagin
Chair(s): Gabriele Eichfelder

  • A general branch-and-bound framework for global multiobjective optimization
    Oliver Stein We develop a general framework for branch-and-bound methods in multiobjective optimization. Our focus is on natural generalizations of notions and techniques from the single objective case. After a review of the main ideas of single-objective branch-and-bound we discuss the appropriate generalizations of upper and lower bounds, discarding tests, node selection and termination criteria to the multiobjective setting. As a novel tool for multiobjective branch-and-bound we introduce convergent enclosures for the set of nondominated points. Numerical results illustrate our approach.
  • Extending the Robust (Relative) Regret Approach to a Multicriteria Setting
    Patrick Groetzner, Ralf Werner Consider a multiobjective decision problem with uncertainty given as a set of scenarios. In the single-criterion case, a famous approach is to compare the possible decisions under uncertainty against the optimal decision with the benefit of hindsight, i.e. to minimize the (possibly scaled) regret of not having chosen the optimal decision. In this talk, we extend the concept of regret to the multiobjective setting and introduce a proper definition of multivariate (relative) regret. In contrast to existing approaches, we are not limited to finite uncertainty sets or interval uncertainty.
  • Solving uncertain multiobjective optimization problems via epigraphical reformulations
    Ernest Quintana, Gabriele Eichfelder In this talk, we consider a solution approach for set-based robust solutions to multiobjective optimization problems under uncertainty. Specifically, we derive a parametric family of (deterministic) semi-infinite multiobjective problems whose solution sets approximate, with desired accuracy, that of the original problem. The tractability of the semi-infinite constraints is also analyzed with tools of Fenchel duality. Our approach generalizes the standard epigraphical reformulation of robust scalar problems to the multiobjective setting.
  • A New Directional Derivative for Set Optimization
    Gabriele Eichfelder, Robert Baier, Tobias Gerlach Set optimization has attracted increasing interest in recent years, as, for instance, uncertain multiobjective optimization problems lead to such problems with a set-valued objective function. Optimality conditions for these problems, for instance using directional derivatives, are still very limited. The key aspect for a useful directional derivative is the definition of the set difference used. We present a new set difference which avoids the use of a convex hull and which applies to arbitrary convex sets. It is based on the new concept of generalized Steiner sets.

WF-06: Conic Optimization and related topics
Stream: Conic Optimization and related topics
Room: Moreau
Chair(s): Paula Amaral

  • Min-max regret for Standard QPs and nonconvex fractional QPs
    Immanuel Bomze, Paula Amaral, Patrick Groetzner Under finitely many uncertain scenarios, min-max regret for (possibly nonconvex) Standard QPs (the format is recalled after this session listing) can be reduced to a min-max QCQP. We will follow this route and embed it into the more general framework of nonconvex fractional quadratic optimization with linear denominators, under linear (and/or quadratic) constraints, where copositivity bounds play an essential role in reducing the optimality gap in global optimization procedures.
  • On Standard Quadratic Programs with Exact and Inexact Doubly Nonnegative Relaxations
    E. Alper Yildirim, Yakup Görkem Gökmen We consider the doubly nonnegative (DN) relaxation of standard quadratic programs. We present a full algebraic characterisation of the set of instances of standard quadratic programs that admit an exact DN relaxation. This characterisation yields an algorithmic recipe for constructing such an instance. In addition, we explicitly identify three families of instances for which the DN relaxation is exact. We also provide an algebraic characterisation of the set of instances for which the DN relaxation has a positive gap and show how to construct such an instance using this characterisation.
  • Copositive based bounds on the clique number
    Paula Amaral, Immanuel Bomze For a given graph G, the maximum clique problem consists in determining the size γ(G) of a maximum complete subgraph of G. It is well known that determining γ(G) is an NP-hard problem. This problem can be formulated as a copositive optimization problem (the classical Motzkin-Straus connection is recalled after this session listing). Copositivity detection is difficult; in particular, deciding whether a matrix is not copositive is NP-complete. In this talk we present an algorithm that is based on the decomposition of a matrix in the doubly non-negative cone. The algorithm gives upper and lower bounds on the clique number. We report computational results for a set of graphs.
  • Optimal parameter tuning for path-following methods in conic optimization
    Anastasiia Ivanova Classical schemes for short-step path-following methods in conic optimization alternate a Newton step towards a target point on the central path and an increase of the parameter of the target point. The quantitative analysis of the performance of these methods relies on a comparison of the Newton decrement before and after the Newton step. We propose a novel scheme which replaces the Newton decrement with respect to a target point by an analog of the decrement with respect to the central path as a whole. Here the direction parallel to the central path is treated differently from the directions transversal to it.
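For the first two talks of this session, recall that a Standard Quadratic Program (StQP) minimizes a quadratic form over the simplex:

    \min_{x \in \Delta} \; x^\top Q x, \qquad
    \Delta = \{ x \in \mathbb{R}^n : x \ge 0, \; e^\top x = 1 \};

despite the simple feasible set, the problem is NP-hard for indefinite Q, and the min-max regret version considered above compares decisions against the scenario-wise optima over finitely many scenario matrices Q_1, ..., Q_m.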
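The copositive connection in the clique-number talk rests on the classical Motzkin-Straus theorem: with A the adjacency matrix of G and Δ the simplex as above,

    \max_{x \in \Delta} \; x^\top A x \;=\; 1 - \frac{1}{\gamma(G)},

so upper and lower bounds on this StQP value translate into bounds on the clique number γ(G).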

Thursday

Thursday, 9:00 – 10:40

TA-01: Non-smoothness, inexactness and applications
Stream: Alternating Direction Method of Multipliers and its Applications
Room: Fermat
Chair(s): Stefano Cipolla

  • Differentiating non-smooth (minimisation or) saddle point problems
    Antonin Chambolle In recent works with T. Pock (TU Graz), we adapted the “piggyback” method to compute the differential of a loss depending on the solution of a non-smooth saddle point problem. The idea is to estimate the adjoint state by an iterative method parallel to the computation of the solution, rather than inverting a system depending on the solution. One advantage is that it may be done also for degenerate problems where the inversion is impossible. While the method works in practice, it is justified for smooth problems only. We will discuss attempts to generalize it to less smooth situations.
  • Principled analysis of methods with inexact proximal computations
    Mathieu BARRÉ, Adrien Taylor Proximal operations are among the most common primitives appearing in optimization. In particular, they are crucial tools in many standard splitting methods. We show that worst-case guarantees for algorithms relying on inexact proximal operations can be systematically obtained through a generic procedure based on semidefinite programming. This methodology is primarily based on the approach from (Drori & Teboulle, 2014) and on convex interpolation results, and allows producing non-improvable worst-case analyses. We illustrate it on various algorithms involving inexact proximal computations.
  • A Flexible Optimization Framework for Regularized Linearly Coupled Matrix-Tensor Factorizations based on the Alternating Direction Method of Multipliers
    Jeremy Cohen, Carla Schenker, Evrim Acar Ataman Coupled matrix and tensor factorizations (CMTF) are frequently used to jointly analyze data from multiple sources, a task also called data fusion. We propose a flexible algorithmic framework for coupled matrix and tensor factorizations which utilizes Alternating Optimization (AO) and the Alternating Direction Method of Multipliers (ADMM). The framework facilitates the use of a variety of constraints, regularizations, loss functions and couplings with linear transformations in a seamless way. We demonstrate its performance on some simulated and real datasets.
  • An ADMM-Newton-CNN Numerical Approach to a TV Model for Identifying Discontinuous Diffusion Coefficients in Elliptic Equations: Convex Case with Gradient Observations
    Xiaoming Yuan We consider a TV model for identifying the discontinuous diffusion coefficient in an elliptic equation with observation data of the gradient of the solution, and show that the ADMM can be used effectively. We also show that one of the ADMM subproblems can be well solved by the active-set Newton method along with the Schur complement reduction method, and the other one can be efficiently solved by the deep convolutional neural network (CNN). The resulting ADMM-Newton-CNN approach is demonstrated to be efficient even for higher-dimensional spaces with fine mesh discretization.

TA-02: Nonlinear Composite and Constrained Optimization – Part I
Stream: Advances in mathematical optimization for machine learning
Room: Turing
Chair(s): Cong Bang Vu, Dimitri Papadimitriou

  • Proximal alternating minimization for solving discrete Mumford-Shah and Ambrosio-Tortorelli models
    Hoang Trieu Vy LE, Marion Foare, Nelly Pustelnik This work focuses on a class of imaging problems which can be reformulated as a non-linear composite problem. We derive two proximal alternating minimization schemes with convergence guarantees to estimate critical points. In particular, we place our attention on the discrete Mumford-Shah and the Ambrosio-Tortorelli models. We compare their behaviors when the discrete counterpart of the 1D Hausdorff measure is modeled either by an l1-norm or AT penalization. The closed-form expressions of the involved proximity operators are provided.
  • Random extrapolation for primal-dual coordinate descent
    Ahmet Alacaoglu, Olivier Fercoq, Volkan Cevher We introduce a randomly extrapolated primal-dual coordinate descent method that adapts to sparsity of the data matrix and the favorable structures of the objective function. Our method updates only a subset of primal and dual variables with sparse data, and it uses large step sizes with dense data. We prove linear convergence under metric subregularity for strongly convex-strongly concave problems, and almost sure convergence of the sequence and optimal sublinear convergence rates for the primal-dual gap and objective values, in the general convex-concave case.
  • Primal-Dual Proximal Splitting Algorithms for Large-Scale Convex Optimization
    Laurent Condat, Adil Salim, Peter Richtarik We study algorithms to minimize a sum of convex functions, possibly composed with linear operators, using their individual gradient or proximity operator. Using two different constructions based on the splitting technique proposed by Davis and Yin in 2017, we recover existing algorithms and discover a new one, which we call PDDY. Different variants, distributed, decentralized, or with stochastic gradient, will be discussed. Moreover, we derive nonergodic sublinear or linear convergence rates, as well as new accelerated versions in presence of strong convexity, using varying stepsizes.
  • On a Primal-Dual Newton Proximal Method for Convex Quadratic Programs
    Alberto De Marchi We introduce QPDO, a primal-dual method for convex quadratic programs which builds upon the proximal point algorithm and a damped semismooth Newton’s method. The outer proximal regularization yields a numerically stable method, and we interpret the proximal operator as the unconstrained minimization of a suitable merit function. The inner minimization relies on linear systems that are always solvable and exact linesearch. We report on numerical results against state-of-the-art solvers. QPDO proves to be a simple, robust, and efficient numerical method for convex quadratic programming.

TA-03: Game theory, multilevel and dynamic optimization I
Stream: Variational inequalities, Nash games, game theory, multilevel and dynamic optimization
Room: Nash
Chair(s): Mathias Staudigl

  • Consistent Multiscale Control of Stackelberg Games
    Anna Thünen We present a Stackelberg game with a large number of followers and we also derive the mean field limit of infinitely many followers. The relation between optimization and mean-field limit is studied and conditions for consistency are established. Finally, we propose a numerical method based on the derived models and present numerical results.
  • Approximation and exact penalization in hierarchical programming
    Giancarlo Bigi, Lorenzo Lampariello, Simone Sagratella Hierarchical programs are optimization problems whose feasible set is the solution set of another problem. The talk focuses on lower-level problems that are non-parametric wrt upper level variables: minimization over the solution set of a variational inequality is considered, which is a peculiar semi-infinite program and encompasses simple bilevel and equilibrium selection problems. To tackle it, a suitable approximated version is introduced. This does not perturb the original program too much while allowing the exploitation of exact penalty approaches whose convergence properties are shown.
  • Generalized Nash games with discontinuous cost functions: a new approach
    Didier Aussel, David Salas, Kien Cao Van A generalized Nash equilibrium problem (GNEP) corresponds to a non-cooperative interaction between a finite set of players in which the cost function and the feasible set of each player depend on the decisions of the other players. The classical existence result for generalized equilibria, due to Arrow-Debreu, requires continuity of the cost functions. In this work, we provide an existence result that transfers this hypothesis to a “continuity-like” condition over the sublevel sets of the aforementioned functions. A comparison with Reny's approach for discontinuous games is also considered.
  • Qualitative stability for solution map of parameterized Nash equilibrium problem and application
    Thanh Cong Lai Nguyen, Didier Aussel It is well known that the Nash Equilibrium Problem is a multi-player model of non-cooperative games. It is of interest to know whether the solution set of its parameterized form is stable or not. Here stability is understood in the sense of the semicontinuity of the solution set-valued map. We explore a number of different approaches based on either product-type assumptions or componentwise hypotheses. The qualitative stability is investigated under properties such as semicontinuity and quasiconvexity of the cost functions of the players. An application to single-leader-multi-follower games is then discussed.

TA-04: Optimization under Uncertainty and Applications I
Stream: Optimization under Uncertainty and Applications
Room: Lagrange
Chair(s): Eligius M.T. Hendrix

  • Kernel Distributionally Robust Optimization
    Jia-Jie Zhu We propose kernel distributionally robust optimization (Kernel DRO) using insights from functional analysis (a schematic DRO formulation is recalled after this session listing). Our method uses reproducing kernel Hilbert spaces to construct ambiguity sets, which can be generalized to integral probability metrics and moment bounds. This perspective unifies existing robust and stochastic optimization methods. We prove a theorem that generalizes the classical duality in the mathematical problem of moments. Using universal RKHSs, our method applies to a broad class of loss functions, lifting common limitations such as knowledge of the Lipschitz constant.
  • Optimal residential Microgrid operation considering Vehicle-to-Home (V2H) and Vehicle-to-Grid (V2G) energy services
    Energy Management System (EMS) plays a key role in operating Microgrids (MG). We consider a residential grid-connected MG equipped with PV system and PEV battery. Based on Stochastic Optimization and Machine Learning approaches, we investigate different scenarios of the MG optimization considering various uncertainties and constraints. The developed Home EMS (HEMS) optimizes the scheduling and the power dispatch of the MG considering V2G and V2H services and allows to identify household’s potential energy cost savings and the effectiveness of V2G and V2H strategies in operating the MG.
  • Convex Optimization using Tunable First-Order Inexact Oracles
    Guillaume Van Dessel, François Glineur Using black-box first-order methods with an approximate gradient, i.e. an inexact oracle, may significantly impact their convergence. In this work, we consider a tunable inexact oracle, which can provide first-order information with increasing levels of exactness, albeit at an increasing computational cost, as happens e.g. when the objective function is the optimal value of an inner subproblem. Given a target final accuracy, we show how to select the inexactness level at each step in order to minimize the total computational budget, and present several situations where this is beneficial.
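
For reference, distributionally robust problems of the kind considered in the Kernel DRO talk take the generic min-sup form below (our notation, not the speaker's); when the metric d is generated by the unit ball of an RKHS, it is the maximum mean discrepancy (MMD):

    \min_{\theta} \; \sup_{P \,:\, d(P,\, \widehat{P}_N) \le \varepsilon} \; \mathbb{E}_{\xi \sim P}\big[ \ell(\theta, \xi) \big]

Here \widehat{P}_N is the empirical distribution, \ell the loss, and \varepsilon the radius of the ambiguity set.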

Thursday, 11:00 – 12:40

TB-01: Optimization for Air Transportation
Stream: Optimization for Air Transportation
Room: Fermat
Chair(s): Andrija VIDOSAVLJEVIC

  • Optimization for safe urban air mobility
    Mercedes Pelegrín, Claudia D’Ambrosio, Rémi Delmas, Youssef Hamadi Urban Air Mobility (UAM) will exploit the third dimension to smooth ground traffic in densely populated areas. On-line automated air traffic management will be key to ensure safety and optimize airspace capacity. This work addresses the problem of Tactical Deconfliction in UAM, considering different sources of unexpected traffic disruptions. A mathematical programming model based on envisioned UAM corridors is proposed. Extensive computational experience will allow us to draw conclusions on the benefits and limitations of our approach.
  • Trajectory optimization for the computation of safe emergency trajectories
    Maëva ONGALE-OBEYI, Damien Goubinat, Pierre-Loic Garoche, Daniel Delahaye Nowadays, during an unexpected event in flight, the crew has to compare all the alternatives and plan a trajectory without any automated decision support. This decision has to be made quickly in a stressful environment. Thus, the creation of an emergency landing planner seems crucial. We want to create a planner capable of computing a trajectory that is: (a) safe, (b) flyable and (c) in accordance with the available resources. For this purpose, we decided to combine three algorithms, each providing one of these features: a path planner, a path smoothing system and a continuous trajectory system.
  • New formulation for aircraft conflict avoidance problem using Sequential convex MINLP technique
    Renan Spencer Trindade, Claudia D’Ambrosio, Antonio Frangioni, Claudio Gentile Our work presents a new formulation for the aircraft conflict avoidance problem in two-dimensional space, where only the change in flight direction angle is allowed. The goal is to find the minimal variation of the original aircraft route angle while respecting the aircraft’s minimum safety distance along the trip. We propose a formulation defined as the sum of non-convex univariate functions of the original problem. Sequential Convex Mixed Integer Non-Linear Programming is applied to the problem, defining a convex relaxation of the original problem.
  • Two-stage stochastic programming for the extended aircraft arrival management problem with multiple pre-scheduling points
    Ahmed Khassiba, Fabian Bastin, Sonia Cafieri, Bernard Gendron, Marcel Mongeau We consider the two-stage stochastic optimization problem where a set of aircraft, heading to a given airport, are to be pre-scheduled on a reference point in the near-to-airport area in the first stage, and to be scheduled on the landing runway in the second stage. Actual arrival times on the pre-scheduling point are assumed to deviate randomly from target arrival times. In this work, we extend a previously proposed model to the case of multiple pre-scheduling points. Preliminary results are obtained on realistic instances from Paris-Charles-de-Gaulle airport.

TB-02: Nonlinear Composite and Constrained Optimization – Part II
Stream: Advances in mathematical optimization for machine learning
Room: Turing
Chair(s): Cong Bang Vu, Dimitri Papadimitriou

  • A primal dual method for optimization with nonlinear constraints
    Mehmet Fatih Sahin We propose a practical inexact augmented Lagrangian method (iALM) for nonconvex problems with nonlinear constraints. We characterize the total computational complexity of our method subject to a verifiable geometric condition, which is closely related to the Polyak-Lojasiewicz and Mangasarian-Fromowitz conditions. We also provide numerical evidence on machine learning problems, including the Burer-Monteiro factorization of semidefinite programs.
  • Safe Screening for the Generalized Conditional Gradient Method
    Yifan Sun We explore the sparsity acquiring properties of a generalized conditional gradient method, where the constraint is replaced by a gauge penalty function. Without assuming bounded iterates, we show O(1/t) convergence of the function values and duality gap. We couple this with a safe screening rule, and show that at a rate O(1/(td^2)), the screened support matches the support at the solution, where d > 0 measures problem degeneracy. Numerical experiments support our theoretical findings.
  • An Optimal Algorithm for Strongly Convex Minimization under Affine Constraints
    Adil Salim, Laurent Condat, Dmitry Kovalev, Peter Richtarik Optimization problems under affine constraints appear in various areas of machine learning. We consider the task of minimizing a smooth strongly convex function F under the affine constraint L x = b, with an oracle providing evaluations of the gradient of F and multiplications by L and its transpose. We provide lower bounds on the number of gradient computations and matrix multiplications to achieve a given accuracy. Then we propose an accelerated primal-dual algorithm achieving these lower bounds. Our algorithm is the first optimal algorithm for this class of problems.
  • Larger stepsizes for some primal-dual algorithms
    Ming Yan, Zhi Li Many primal-dual algorithms have been developed to solve optimization problems with a linear composition, and there is a common upper bound on the product of the primal and dual stepsizes. In this talk, I will present a larger upper bound on this product for some primal-dual algorithms. In addition, we provide examples showing that the new upper bound is tight. We then apply this upper bound to show the convergence of several decentralized algorithms under weaker conditions.
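
For context, the classical condition alluded to above (not the talk's new bound) is the stepsize requirement of the Chambolle-Pock primal-dual hybrid gradient method for min_x f(x) + g(Lx), with primal stepsize \tau and dual stepsize \sigma:

    \tau \, \sigma \, \|L\|^2 < 1

The talk concerns how far the product \tau\sigma can be pushed beyond such thresholds while preserving convergence.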

TB-03: Solution Techniques for Variational Inequalities
Stream: Variational inequalities, Nash games, game theory, multilevel and dynamic optimization
Room: Nash
Chair(s): Patrizia Daniele

  • Pseudo-monotone variational inequalities: Dynamics and numerical schemes of Tseng type
    Robert Csetnek We associate to a pseudo-monotone variational inequality a Tseng-type dynamical system and carry out an asymptotic analysis for the generated trajectories. The explicit time discretization of this system results in Tseng’s forward-backward-forward algorithm with relaxation parameters, which we prove to converge also when applied to the solving of pseudo-monotone variational inequalities (the basic forward-backward-forward iteration is sketched after this session’s listing). In addition, we show that linear convergence is guaranteed under strong pseudo-monotonicity. We close with numerical experiments which justify the theory presented.
  • Variance Reduction schemes for stochastic Variational inequalities
    Mathias Staudigl We develop a new stochastic algorithm with variance reduction for solving pseudo-monotone stochastic variational inequalities. Our method builds on Tseng’s forward-backward-forward algorithm, which is known in the deterministic literature to be a valuable alternative to Korpelevich’s extragradient method when solving variational inequalities over a convex and closed set governed by pseudo-monotone, Lipschitz continuous operators.
  • A second order dynamical system and its discretization for strongly pseudo-monotone variational inequalities
    Vuong Phan We consider a second order dynamical system for solving variational inequalities in Hilbert spaces. Under standard conditions, we prove the existence and uniqueness of a strong global solution of the proposed dynamical system. The exponential convergence of trajectories is established under strong pseudo-monotonicity and Lipschitz continuity assumptions. A discrete version of the proposed dynamical system leads to a relaxed inertial projection algorithm whose linear convergence is proved. We discuss the possibility of extension to general monotone inclusions.
  • Stochastic generalized Nash equilibrium seeking and Variational Inequalities
    Barbara Franci, Sergio Grammatico We consider the stochastic generalized Nash equilibrium problem (SGNEP) in merely monotone games with expected-value cost functions and shared constraints. Specifically, we present a distributed SGNE seeking algorithm with a single proximal computation (e.g. projection) and a single evaluation of the pseudogradient mapping. Our scheme is inspired by the relaxed forward-backward algorithm for variational inequalities by Malitsky (Mathematical Programming, 2019), and convergence is proven under monotonicity of the pseudogradient, approximated with the average over a number of random samples.
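
Several talks in this session build on Tseng's forward-backward-forward (FBF) method. Below is a minimal sketch of the classical deterministic iteration for the variational inequality VI(F, C) on a toy monotone instance; the code and instance are ours for illustration, and the talks' relaxed/stochastic variants differ in detail:

    import numpy as np

    def tseng_fbf(F, proj_C, x0, gamma, iters=2000):
        """Tseng's forward-backward-forward method for VI(F, C):
        find x* in C with <F(x*), x - x*> >= 0 for all x in C.
        Convergence requires gamma < 1/L, L the Lipschitz constant of F."""
        x = x0
        for _ in range(iters):
            y = proj_C(x - gamma * F(x))       # forward-backward step
            x = y + gamma * (F(x) - F(y))      # second forward (correction) step
        return x

    # Toy instance: F(x) = A x with A skew-symmetric (monotone, Lipschitz),
    # C the Euclidean unit ball; the unique solution is the origin.
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    F = lambda x: A @ x
    proj_C = lambda x: x / max(1.0, np.linalg.norm(x))
    L = np.linalg.norm(A, 2)
    print(tseng_fbf(F, proj_C, np.array([1.0, 0.5]), gamma=0.9 / L))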

TB-04: Energetic and Environmental Applications of Optimization II
Stream: Energetic and Environmental Applications of Optimization
Room: Lagrange
Chair(s): Catherine Choquet

  • Mixed-Integer Nonlinear Optimization for District Heating Network Expansion
    Marius Roland, Martin Schmidt We present an MINLP for the expansion of district heating networks given a number of candidate consumers. Expansion decisions are made using the stationary-state solution for the district heating system, taking into account the pressure and thermal losses of the water pipes. We propose an approximation of the thermal energy equation, and additional valid inequalities are derived to help global solvers. Our numerical results show a trade-off between connecting distant candidate consumers, thus increasing thermal and pressure losses, and the estimated average consumer payment.
  • Optimal positioning of groundwater heat pumps using the mesh adaptive direct search algorithm
    Smajil Halilovic, Fabian Böttcher, Thomas Hamacher Groundwater heat pumps (GWHP) are one of the main technologies for heating and cooling, yet to date groundwater is not utilized systematically. A way to maximize the potential of the resource is to position GWHPs optimally. This is a PDE-constrained optimization problem. Due to expensive gradient estimations, we take a derivative-free optimization approach using the mesh adaptive direct search algorithm (MADS). To reduce the computational load, several dynamic and static surrogates are used in numerical benchmarks with various numbers of heat pumps and applied to real-case scenarios.
  • Response Surface Optimization of Continuous Packed Bed Adsorption Processes for Textile Wastewater Treatment
    Younes Abrouki, Abdelkader Anouzla, Hayat Loukili In this study, we investigated the response surface optimization of continuous packed-bed adsorption processes for textile wastewater treatment using a low-cost adsorbent developed from bio-waste. Under the optimal conditions identified by a central composite design, it is possible to eliminate the majority of the pollutants present in textile wastewater. This process, using a continuous fixed-bed adsorption column filled with the biowaste-derived adsorbent, was found to be efficient at removing pollutants from textile wastewater.
  • Numerical validation of a game theory approach for the optimal control problem of groundwater pollution
    Moussa Mory DIEDHIOU, Catherine Choquet We consider a spatial differential game modeling the non-cooperation of two polluters for the optimal control of groundwater quality while maximizing the profit of their polluting activity. Spatio-temporal objectives are constrained by a hydrogeological model for the spread of the pollution in the aquifer, which consists of a nonlinear parabolic partial differential equation for the pollutant concentration coupled with an elliptic equation for the velocity of the flow. The problem under consideration belongs to the class of infinite-dimensional multiobjective control problems.

TB-06: Global Optimization
Stream: Global Optimization
Room: Moreau
Chair(s): Leocadio G. Casado

  • Exact and heuristic algorithms for solving an MINLP single facility location and design problem for firm expansion
    A firm has a given budget available to invest in a given geographical area where it already runs some other facilities. Competing facilities already exist in the market. In order to increase its profit, the firm can (1) locate a new facility, (2) change the quality of the existing facilities up or down, or even close some of them, or (3) do both things. An MINLP formulation for this problem is introduced, and both an interval branch-and-bound algorithm and a heuristic procedure are proposed to cope with it. According to the results, small variations in the available budget may produce very different results.
  • Nested branch-and-bound algorithm for minmax problem and constraints with quantifiers
    Jordan Ninin, Dominique Monnet, Benoit Clement We consider global optimization problems of a minmax function subject to constraints with quantifiers. The goal is to minimize over x the maximum over y of f(x,y), and the constraints with quantifiers must be satisfied for a set of values (for all y in S). We present an algorithm to address the non-smooth objective function and the constraints with quantifiers in the same way. Our approach is a set-valued method based on two nested branch-and-bound algorithms, using interval analysis. This problem arises in structured robust control, semi-infinite optimization or minmax optimization.
  • Towards accelerating global parameter estimation with reduced data sets
    Susanne Sass, Angelos Tsoukalas, Ian H. Bell, Dominik Bongartz, Jaromil Najman, Alexander Mitsos Many large parameter estimation problems are intractable for deterministic global optimization solvers. With a Branch and Bound algorithm in mind, we therefore investigate whether valid lower bounds can be constructed based on reduced data sets. To that end, we take the equation of state for propane as a case study. Our results indicate that reduced models can successfully identify regions where the model based on the full data set yields high and low objective values. However, whether and to what extent there is a time gain due to data reduction depends on the actual data sample chosen.
  • On simplex minds and monotonicity
    Eligius M.T. Hendrix, Leocadio G. Casado, Frederic Messine, Boglárka G.-Tóth Simplicial branch-and-bound type methods could make use of gradient bounds in order to create novel bounds over a simplicial partition set. Our first question is how to use ideas from interval arithmetic to generate such bounds. Our second question deals with the concept of monotonicity and its exploitation in a simplicial branch-and-bound framework. We will report on our findings thus far. This investigation has been supported by the Spanish Ministry (RTI2018-095993-B-I00), in part financed by the European Regional Development Fund (ERDF).
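
Several talks in this session rely on interval bounds inside branch and bound. As a toy illustration of the basic mechanism (our code, not any speaker's), the sketch below bounds f(x) = x^2 - x on [0, 1] with a natural interval extension and tightens the certified lower bound by bisection; the interval bound over-estimates the range because x appears twice, and refinement shrinks this overestimation:

    class Interval:
        """Minimal interval arithmetic, just enough for f(x) = x**2 - x."""
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __add__(self, o):
            return Interval(self.lo + o.lo, self.hi + o.hi)
        def __neg__(self):
            return Interval(-self.hi, -self.lo)
        def __sub__(self, o):
            return self + (-o)
        def sq(self):
            c = [self.lo ** 2, self.hi ** 2]
            return Interval(0.0 if self.lo <= 0.0 <= self.hi else min(c), max(c))

    def certified_lower_bound(f, box, splits=50):
        """Keep a cover of the domain; the min of the interval lower bounds
        over the cover is a valid global lower bound, improving as boxes shrink."""
        boxes = [box]
        for _ in range(splits):
            x = boxes.pop(0)                   # breadth-first refinement
            mid = 0.5 * (x.lo + x.hi)
            boxes += [Interval(x.lo, mid), Interval(mid, x.hi)]
        return min(f(b).lo for b in boxes)

    f = lambda x: x.sq() - x                   # true minimum is -0.25 at x = 0.5
    print(certified_lower_bound(f, Interval(0.0, 1.0)))   # approaches -0.25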

Thursday, 14:00 – 15:00

TC-01: EUROPT Fellow 2021 Lecture
Stream: Plenary
Room: Fermat
Chair(s): Miguel F. Anjos, Giancarlo Bigi

  • Sums of squares of polynomials and graphs
    Monique Laurent We investigate a hierarchy of semidefinite bounds for the stability number alpha(G) of a graph G, that are based on its continuous copositive programming formulation: alpha(G) = min {t: t(I+A_G)-J \in COP(n)}. Here, n is the number of vertices, A_G is the adjacency matrix, J is the all-ones matrix, and COP(n) is the copositive cone, which consists of the symmetric matrices M for which the polynomial p_M = sum_{i,j=1}^n M_{ij}x_i^2 x_j^2 is nonnegative over R^n. By replacing the copositive cone COP(n) by its inner conic approximations K^r, consisting of the matrices M for which the polynomial |x|^{2r}p_M is a sum of squares, we obtain the bounds theta^{r}(G), known to converge asymptotically to alpha(G). De Klerk and Pasechnik (2002) conjectured that finite convergence takes place at order r=alpha(G)-1. However, even the weaker conjecture asking whether equality holds for some r is still open. We discuss old and new results around these questions and the cones K^r, which display a nice interplay between graph structure, polynomial optimization and real algebraic geometry. Based on joint work with Luis Felipe Vargas (CWI, Amsterdam).
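
In display form, the formulation and hierarchy described in the abstract read (notation as above):

    \alpha(G) = \min \{\, t : t(I + A_G) - J \in \mathrm{COP}(n) \,\}, \qquad
    p_M(x) = \sum_{i,j=1}^{n} M_{ij}\, x_i^2 x_j^2,

    \mathcal{K}^{r} = \{\, M = M^{T} : \|x\|^{2r} p_M(x) \text{ is a sum of squares} \,\}, \qquad
    \theta^{r}(G) = \min \{\, t : t(I + A_G) - J \in \mathcal{K}^{r} \,\} \;\ge\; \alpha(G),

where the inequality holds because \mathcal{K}^{r} is an inner approximation of \mathrm{COP}(n).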

Thursday, 15:15 – 16:30

TD-01: Sparse and Large-Scale Optimization
Stream: Sparse and Large-Scale Optimization
Room: Fermat
Chair(s): Emmanuel Soubies

  • On nondegenerate M-stationary points for mathematical programs with sparsity constraint
    Sebastian Lämmel We study mathematical programs with sparsity constraint (MPSC) from a topological point of view. Special focus will be on M-stationary points from Burdakov et al. (2016). We introduce nondegenerate M-stationary points, define their M-index, and show that all M-stationary points are generically nondegenerate. In particular, the sparsity constraint is active at all local minimizers of a generic MPSC. Moreover, we discuss the issues of instability and degeneracy of points due to different stationarity concepts.
  • AAR-Based Decomposition Method For Limit Analysis
    Nima Rabiei, Ali Almisreb A method is suggested for decomposing a class of non-linear convex programs which are encountered in limit analysis. These problems have second-order conic constraints and a single complicating variable in the objective function. The method is based on finding the distance between the feasible sets of the decomposed problems, and updating the global optimal value according to the value of this distance. The latter is found by exploiting the method of Averaged Alternating Reflections (AAR), which is here adapted to the optimization problem at hand (the basic AAR iteration is sketched after this session’s listing).
  • Expanding Boundaries of Gap Safe Screening
    Cassio Dantas, Emmanuel Soubies, Cédric Févotte Safe screening techniques allow the early elimination of zero coordinates in the solution of sparse optimization problems. In this work, we extend the existing Gap Safe screening framework by relaxing the global strong-concavity assumption on the dual cost function. Local regularity properties are considered instead. Besides extending safe screening to a broader class of functions that includes beta-divergences (e.g., the Kullback-Leibler divergence), the proposed approach also improves upon the existing Gap Safe screening rules on previously applicable cases (e.g., logistic regression).
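
For reference, the classical Averaged Alternating Reflections iteration adapted in the AAR-based decomposition talk is, for two closed convex sets A and B with projectors P_A and P_B, the map x+ = x/2 + R_A(R_B(x))/2 with reflections R = 2P - I. A minimal sketch on a toy feasibility instance (our code, not the speakers'):

    import numpy as np

    def aar(proj_A, proj_B, x0, iters=500):
        """Averaged Alternating Reflections for finding a point in A ∩ B.
        The 'shadow' sequence proj_B(x_k) approaches A ∩ B when it is nonempty;
        otherwise its distance to A approaches the gap between the sets."""
        reflect = lambda proj, x: 2.0 * proj(x) - x
        x = x0
        for _ in range(iters):
            x = 0.5 * (x + reflect(proj_A, reflect(proj_B, x)))
        return proj_B(x)

    # Toy instance: A = Euclidean unit ball, B = hyperplane {x : <a, x> = 1}.
    a = np.array([1.0, 2.0])
    proj_A = lambda x: x / max(1.0, np.linalg.norm(x))
    proj_B = lambda x: x + ((1.0 - a @ x) / (a @ a)) * a
    print(aar(proj_A, proj_B, np.array([3.0, -1.0])))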

TD-02: Advances in Douglas-Rachford method – Part I
Stream: Advances in mathematical optimization for machine learning
Room: Turing
Chair(s): Cong Bang Vu, Dimitri Papadimitriou

  • Split-Douglas-Rachford for composite monotone inclusions and Split-ADMM
    Luis Briceño-Arias, Fernando Roldán We provide a generalization of the Douglas-Rachford splitting (DRS) for solving monotone inclusions in a real Hilbert space involving a general linear operator. The proposed method activates the linear operator separately from the monotone operators appearing in the inclusion and, in the simplest case when the linear operator is the identity, it reduces to classical DRS (sketched after this session’s listing). Moreover, the weak convergence of primal-dual sequences to a point in the extended solution set is guaranteed, generalizing Svaiter (2011). As in Gabay (1983), we derive a new Split-ADMM in the convex optimization setting.
  • Anderson Accelerated Douglas-Rachford Splitting
    Anqi Fu, Junzi Zhang, Stephen Boyd We consider the problem of nonsmooth convex optimization with linear equality constraints, where the objective function is only accessible through its proximal operator. To solve it, we propose an Anderson accelerated Douglas-Rachford splitting (A2DR) algorithm, which we show either globally converges or provides a certificate of infeasibility/unboundedness. Applied to a block separable objective, A2DR partially decouples so its steps may be carried out in parallel, yielding an algorithm that is highly scalable. We describe an open-source implementation along with a variety of applications.
  • Douglas-Rachford splitting and ADMM for nonconvex optimization: Accelerated and Newton-type algorithms
    Lorenzo Stella, Andreas Themelis, Panagiotis Patrinos We propose a new line-search extension to Douglas-Rachford splitting (DRS) and ADMM, that accelerates them with quasi-Newton directions. The algorithms are suited for nonconvex problems, require the same black-box oracle as the original methods, and maintain their (subsequential) convergence. Experiments show that using L-BFGS and Anderson acceleration directions greatly improves convergence over vanilla DRS and ADMM, making them more robust to ill-conditioning. Under regularity and nondegeneracy assumptions at the limit point, superlinear convergence is shown when using Broyden directions.
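
For reference, the classical Douglas-Rachford splitting that the talks in this session generalize, accelerate and extend solves min_x f(x) + g(x) through proximal steps. A minimal sketch on a toy convex instance (our code; the talks' variants differ):

    import numpy as np

    def drs(prox_f, prox_g, z0, iters=500):
        """Classical Douglas-Rachford splitting for min_x f(x) + g(x):
        x = prox_f(z); y = prox_g(2x - z); z <- z + (y - x).
        In the convex case, x_k converges to a minimizer."""
        z = z0
        for _ in range(iters):
            x = prox_f(z)
            y = prox_g(2.0 * x - z)
            z = z + (y - x)
        return prox_f(z)

    # Toy instance with stepsize gamma = 1:
    # f(x) = ||x||_1 (prox = soft-thresholding), g(x) = 0.5 * ||x - b||^2.
    b = np.array([2.0, -0.3, 0.7])
    prox_f = lambda z: np.sign(z) * np.maximum(np.abs(z) - 1.0, 0.0)
    prox_g = lambda z: 0.5 * (z + b)
    print(drs(prox_f, prox_g, np.zeros(3)))   # soft-thresholding of b at 1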

TD-03: Advances in Variational Inequalities and Equilibrium Problems
Stream: Variational inequalities, Nash games, game theory, multilevel and dynamic optimization
Room: Nash
Chair(s): Robert Csetnek

  • Optimal emergency evacuation with uncertainties
    Laura Rosa Maria Scrimali, Georgia Fargetta Emergency management after crises or natural disasters is a very important issue shared by many countries. In particular, evacuation planning is a complex and challenging process that takes into account several elements. For this reason, optimization tools are fundamental to help policy makers predict or evaluate different disaster scenarios. In this talk, we present an emergency evacuation model where a population has to be evacuated from crisis areas to shelters, and propose an optimization formulation for minimizing a combination of the transportation cost and the transportation time.
  • A stochastic variational approach to study economic equilibrium problem under uncertainty
    Domenico Scopelliti, Monica Milasi This talk deals with an economic equilibrium problem under time and uncertainty. We suppose that the market evolves over a finite sequence of time periods; at the final date, different states of the world can occur. We first reformulate the equilibrium by means of a stochastic quasi-variational inequality. In view of this characterization, we can give some qualitative and quantitative properties of the equilibrium solutions. Our approach can fit several decision-making problems in which the characteristic elements are affected by uncertainty.
  • A variational approach for international human migration networks with and without regulations
    Patrizia Daniele, Anna Nagurney International human migration has transformed economies and societies. The new millennium, with climate change and its impacts, and increasing conflicts and displacements, has experienced a great increase in international migrants, with associated challenges faced by governments. We propose the modeling, analysis, and solution of international human migration problems by developing a network model with and without regulations. The formalism uses the theory of variational inequalities, coupled with Lagrange analysis.

TD-04: Polynomial Optimization II
Stream: Polynomial Optimization
Room: Lagrange
Chair(s): Corbinian Schlosser

  • Topology optimization of discrete structures by polynomial optimization
    Marek Tyburec, Jan Zeman, Martin Kružík, Didier Henrion Although finding continuous cross-sections of minimum-compliance frame structures is a fundamental problem of structural design, only local optimization techniques are adopted. Here, we develop a semidefinite programming formulation with a non-convex polynomial matrix inequality constraint and solve it by the moment-sum-of-squares hierarchy. Besides lower bounds from the hierarchy, we recover a feasible solution in each relaxation, building a sequence of upper bounds. The gap between the bounds quantifies the solution quality and establishes a simple sufficient condition for global optimality.
  • The approximation of measures by positive moment sequences
    Lorenzo Baldi, Bernard Mourrain To solve polynomial optimization problems, Lasserre introduced the moment matrix hierarchy, approximating measures with a convex cone of positive linear functionals, and proved the convergence of the optimal value to the minimum in the Archimedean case. We analyse the convex cone of truncated positive moment sequences and show that truncated measures with a low-rank moment matrix are extremal rays. We show convergence in Hausdorff distance of a convex section of the outer approximation to the probability measures, and describe the order of convergence using an effective version of Putinar’s Positivstellensatz.
  • RAPOSa: a polynomial optimization solver
    Brais González Rodríguez, Julio González-Díaz, Ángel Manuel González Rueda, Beatriz Pateiro-López, Diego Rodriguez Martinez, David R Penas In this talk we will present RAPOSa, a new solver for polynomial optimization problems based on the Reformulation-Linearization Technique (originally introduced by Sherali and Tuncbilek in 1991). The current implementation incorporates most of the features discussed in past literature along with additional enhancements. The present version of the algorithm has proven to be very efficient, and comparisons with other global optimization solvers will be presented.

TD-05: Optimal Control and Optimization in Economics, Finance and Management I
Stream: Optimal Control and Optimization in Economics, Finance and Management
Room: Pontryagin
Chair(s): Ioannis Baltas

  • Stochastic differential games for optimal investment problems in a Markov regime-switching jump-diffusion market
    Gerhard-Wilhelm Weber, Emel Savku We apply dynamic programming principle to discuss two optimal investment problems by using zero-sum and nonzero-sum stochastic game approaches in a continuous-time Markov regime-switching environment. The first application is a zero-sum game between an investor and the market, and the second one formulates a nonzero-sum stochastic differential portfolio game as the sensitivity of two investors’ terminal gains. We derive regime-switching Hamilton-Jacobi-Bellman-Isaacs equations and obtain explicit optimal portfolio strategies with Feynman-Kac representations of value functions.
  • Random oligopolistic market optimal equilibrium control problem
    Annamaria Barbagallo The aim of the talk is to analyze the policymaker’s point of view on the random oligopolistic market equilibrium problem and to present a stochastic optimal control equilibrium problem. More precisely, we study a model in which control policies may be imposed to regulate the amounts of exportation in a random way. Control policies are implemented by imposing higher taxes or subsidies in order to restrict or encourage exportation. We prove that the system that controls the commodity exportations in a random way is expressed by a stochastic inverse variational inequality.
  • Optimal management of Defined Contribution pension funds during the distribution phase under the effect of inflation, mortality and uncertainty
    Ioannis Baltas, Athanasios Yannacopoulos, Gerhard-Wilhelm Weber We study the problem of optimal management of defined contribution (DC) pension funds, during the distribution phase, within a model uncertainty framework by taking into account the effect of (i) inflation and (ii) mortality. By employing robust control and dynamic programming techniques, we provide closed form solutions for the case of the exponential utility function. Moreover, we provide a detailed study of the limiting behavior of the associated stochastic differential game. Finally, we present a novel numerical approach that elucidates the effect of robustness and inflation.

TD-06: Applications of Optimization III
Stream: Applications of Optimization
Room: Moreau
Chair(s): Jordan Ninin

  • Network models for the human migration phenomenon
    Giorgia Cappello Large-scale migration flows are posing immense challenges for governments around the globe. In this work we introduce and analyze different multiclass human migration network models under both user-optimizing and system-optimizing approaches, using tools from variational inequality and game theory. Four scenarios are proposed: in one model we introduce policy intervention, in the form of subsidies; in another model the locations associated with migration are subject to capacities; in another one we introduce regulations inspired by the Covid-19 pandemic. Some numerical examples are provided.
  • An approach on efficiency measure for a fully fuzzy DEA
    Manuel Arana-Jiménez In this work, we deal with a Data Envelopment Analysis (DEA) problem whose input and output data can be given by means of fuzzy numbers, which yields a fully fuzzy DEA. We propose a new radial, input-oriented approach based on arithmetic operations with fuzzy numbers, as well as the LU-fuzzy partial order. We propose a two-phase method to assess the relative efficiency of each Decision Making Unit (DMU) and to classify each DMU. In each phase, a fully fuzzy linear program is formulated, which is transformed into a related multiobjective optimization problem.
  • Toolpath optimization for freeform surfaces machining
    Mahfoud Herraz, Marcel Mongeau, Mohammed Sbihi A two-step optimization procedure, specifically dedicated to toolpath planning for the milling of free-form surfaces, is presented. A full optimization model is built relying on state-of-the-art black-box optimization methods. Because of specific industrial needs, this procedure has to include a clustering phase and a milling simulation phase. Furthermore, given the context, computation time is very important. Therefore, a surrogate model based on Principal Component Analysis is defined. The approach is benchmarked on several test-case surfaces, and results are presented and discussed.

Thursday, 16:50 – 18:30

TE-01: Optimization and Artificial Intelligence I
Stream: Optimization and Artificial Intelligence
Room: Fermat
Chair(s): Fabian Bastin

  • Exploiting Sparse Decision Trees To Learn Minimal Rules
    Tommaso Aldinucci Decision tree algorithms have been among the most famous and widely used algorithms in machine learning since the early 1980s. Given their intrinsic explainability, nowadays they are widely used in contexts where transparency is desired. In this work we propose a new heuristic strategy to extract minimal rules from decision trees. The algorithm first builds a sparse structure, then uses a local optimizer to improve the quality of the tree. We finally show that the rule set extracted from the final model can often be simplified using elementary rules of Boolean logic.
  • Improved Forward Selection Algorithms for Sparse Principal Component Analysis
    Mustafa Pinar Sparse PCA is a well-known NP-hard problem where the principal components obtained are desired to be sparse. This is useful in various studies due to its interpretability and robustness. In the literature, heuristics or SDP relaxations are used to solve this problem. In this study, we propose multiple novel heuristics using eigenvalue approximations, such as Gershgorin circles, to improve the performance of greedy forward selection methods. We have observed that our methods perform better in under-determined and high-dimensional settings compared to existing methods in the literature.
  • Kernel Regression with Hard Shape Constraints
    Pierre-Cyril Aubin, Zoltán Szabó Regression problems typically involve shape constraints due to qualitative priors or physical constraints, such as positivity or monotonicity. This can be written as an infinite number of pointwise inequalities. Working in reproducing kernel Hilbert spaces, I describe how to solve a strengthened problem with a finite number of second-order cone constraints with bound guarantees, with applications to joint quantile regression, to Engel’s law and to the analysis of vehicle trajectories. This approach is extended to linear-quadratic optimal control problems with state constraints.
  • Leveraging deep reinforcement learning through the framework of operations research
    Ysaël Desage, Fabian Bastin, François Bouffard While research in the field of artificial intelligence has been booming in recent years, especially in machine learning, most applications are still in their infancy. In this perspective, we propose an innovative hybrid approach combining the performance and novelty of deep reinforcement learning with the reliable, empirically proven application framework of classical operations research. More concretely, we detail the proposed methodology before presenting results on different problems and sub-instances related to the power grid and the optimization of its actors.

TE-02: Beyond First-order Optimization Methods for Machine Learning – Part II
Stream: Advances in mathematical optimization for machine learning
Room: Turing
Chair(s): Fred Roosta, Albert Berahas

  • Full-low evaluation type methods for derivative-free optimization
    Luis Nunes Vicente, Albert Berahas, Oumaima Sohab We propose a new class of methods for DFO that considers two types of iterations. The first is expensive in function evaluations, but performs well in the smooth, non-noisy case; here we considered BFGS computed over gradients approximated by finite differences. The second is cheap in function evaluations and more appropriate under noise or non-smoothness; here we considered probabilistic direct search with two random directions. The resulting method is globally convergent in the non-smooth case and yields the appropriate rates in the smooth case. Numerical results show that it is efficient and robust for all types of problems.
  • Newton-MR Algorithms With Complexity Guarantees for Non-convex Optimization
    Fred Roosta Classically, the conjugate gradient (CG) method has been the dominant solver in most inexact Newton-type methods for unconstrained optimization. In this talk, we consider replacing CG with the minimum residual method (MINRES), which is often used for symmetric but possibly indefinite linear systems. We show that MINRES has an inherent ability to detect negative-curvature directions. Equipped with this, we discuss line-search and trust-region variants of Newton-MR, which can be used for optimization of general non-convex objectives, and that come with favourable complexity guarantees.
  • Inexact Restoration with Subsampled Trust-region methods for finite-sum minimization
    Stefania Bellavia, Natasa Krejic, Benedetta Morini In this talk we will focus on subsampling second order trust-region procedures combined with the Inexact Restoration approach for finite-sum minimization. The sample size of function approximations is computed through a deterministic rule inspired by the inexact restoration method and the trust-region step is either accepted or rejected using a suitable merit function. We discuss the local and global properties for finding approximate first and second-order optimal points and show results from the numerical experience.
  • Retrospective Approximation for Stochastic Optimization
    Raghu Bollapragada We present Retrospective Approximation as a universal sequential sample-average approximation paradigm where, during each iteration, a sample-path approximation problem is implicitly generated using an adapted sample size and solved to an adapted error tolerance using a “deterministic method” such as a line-search quasi-Newton method. The principal advantage of RA is that it decouples optimization from stochastic approximation, allowing the direct adoption of existing deterministic algorithms without modification, thus mitigating the need to redesign algorithms for the stochastic context.
  • Global optimization using random embeddings
    Estelle Massart, Coralia Cartis, Adilet Otemissov We present a general random subspace algorithmic framework for global optimization and analyse its convergence using tools from conic integral geometry and random matrix theory. We then particularise this framework and analysis for the class of functions with low effective dimension. We show that its convergence does not depend on the ambient dimension, and are able to estimate the effective dimension in the run of the algorithm. Encouraging numerical results are also presented that use local or global solvers in the subspace.

TE-03: Optimal Control and Applications I
Stream: Optimal Control and Applications
Room: Nash
Chair(s): Theodore Trafalis

  • Optimal control problems on stratified spaces
    Othmane Jerhaoui We are interested in control problems on stratified spaces. In such problems, the state space is partitioned into different manifolds with boundary, glued together along their boundaries. Each manifold is associated with a compact control set, a controlled dynamics and a cost function. We are interested in analyzing the set of trajectories and in characterizing the value function. Moreover, we will discuss some numerical schemes for such control problems. Finally, we discuss some generalizations of this problem to metric spaces resembling this setting, called Aleksandrov spaces.
  • Zero-Sum Deterministic Differential Game in Infinite Horizon with Continuous and Impulse Controls
    HAFID LALIOUI We consider a zero-sum deterministic differential game with two players adopting both continuous and impulse controls in an infinite time horizon. The impulses are supposed to be of general form (depending on nonlinear functions) and the cost of impulses is arbitrary (depending on the state of the system). We use the dynamic programming principle (DPP) and the viscosity solution approach to prove that the value function turns out to be, under the Isaacs condition, the unique viscosity solution to the corresponding Hamilton-Jacobi-Bellman-Isaacs (HJBI) quasi-variational inequalities (QVIs).
  • Mixed Zero-Sum Stochastic Differential Game and Doubly Reflected BSDEs with a Specific Generator.
    Nacer OURKIYA This paper studies the mixed zero-sum stochastic differential game problem. We allow the functionals and dynamics to be of polynomial growth. The problem is formulated as an extended doubly reflected BSDE with a specific generator. We show the existence of a solution for this doubly reflected BSDE and we prove the existence of a saddle-point of the game. Moreover, in the Markovian framework, we prove that the value function is the unique viscosity solution of the associated Hamilton-Jacobi-Bellman equation.
  • Learning-Based Nonlinear H-infinity Control via Game-Theoretic Differential Dynamic Programming
    Wei Sun, Theodore Trafalis We present a learning-based nonlinear H-inf control algorithm that guarantees system performance under learned dynamics and disturbance estimate. The Gaussian Process regression is utilized to update the dynamics of the system and provide disturbance estimate based on data gathered by the system. A soft-constrained differential game associated with the disturbance attenuation problem in nonlinear H-inf control is then formulated to obtain the nonlinear H-inf controller. The differential game is solved through the Game-Theoretic Differential Dynamic Programming algorithm in continuous time.

TE-04: Advanced Optimization Methods I
Stream: Advanced Optimization Methods
Room: Lagrange
Chair(s): Marat Mukhametzhanov

  • Reduced-Order Parameter Optimization by Sequential ALIENOR method
    Daniele Peri Space Filling Curves (SFC) are a class of curves that asymptotically cover a portion of an N-dimensional space completely: an arbitrary position in the space is then associated with a single parameter (the curvilinear abscissa along the SFC). This way, an N-dimensional optimization problem can be converted into a 1D problem (the ALIENOR method). In this paper a sequential approach using SFC is presented, managing hundreds of design variables: the solution is obtained by iteratively focusing on smaller and smaller portions of the feasible set. An industrial application is also presented.
  • Univariate Lipschitz Global Optimization Methods for Finding the Shape Parameter Value in Radial Basis Functions
    Marat Mukhametzhanov, Yaroslav Sergeyev Radial basis function (RBF) interpolation is considered in this talk. It is well known that the value of the shape parameter of the RBFs has a strong impact on both the accuracy and stability of the results. It has been proposed recently to use univariate Lipschitz global optimization methods for finding an optimal value of this parameter. Locally-biased versions of the well-known “Divide-the-best” algorithms are presented for this purpose. Numerical experiments on several randomized and real-life test problems show the advantages of the proposed techniques.
  • On a new shape derivative formula based approach for a shape optimization problem with constrained coupled problems
    Azeddine SADIK, Abdesslam BOULKHEMAIR, Abdelkrim CHAKIB In this paper, we deal with a numerical method for the approximation of a class of coupled shape optimization problems, which consist in minimizing an appropriate general cost functional subject to coupled boundary value problems by means of a Neumann boundary transmission condition. We show the existence of the shape derivative of the cost functional and express it by means of support functions. The numerical discretization is then performed using the dual reciprocity boundary element method. Finally, we give some numerical results showing the efficiency of the proposed approach.
  • Piecewise Linear Bounding of the Euclidean Norm
    Aloïs Duguet, Christian Artigues, Laurent Houssin, Sandra Ulrich Ngueveu In the field of piecewise linear approximation of nonlinear functions, few studies have focused on minimizing the number of pieces for a given error bound. In this talk we focus on the Euclidean norm on the plane, and we first show how to use scalar products with regularly spaced unit vectors in order to compute piecewise linear approximations with the minimum number of pieces for a given relative error bound. We then extend the approach to norms with elliptic level sets. An application to the beam layout problem validates the tractability of the method.
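
The building block of the last talk is classical and easy to state: with K regularly spaced unit directions u_1, ..., u_K in the plane, g(x) = max_k <x, u_k> is a piecewise linear under-approximation of the Euclidean norm whose relative error is governed by cos(pi/K). A minimal numerical check (our code, under these standard assumptions):

    import numpy as np

    K = 16                                          # number of linear pieces
    angles = 2.0 * np.pi * np.arange(K) / K
    U = np.stack([np.cos(angles), np.sin(angles)], axis=1)

    def pwl_norm(x):
        """Piecewise linear under-approximation of ||x||_2 in the plane:
        cos(pi/K) * ||x||_2 <= max_k <x, u_k> <= ||x||_2."""
        return np.max(U @ x)

    rng = np.random.default_rng(0)
    for x in rng.standard_normal((5, 2)):
        lo, exact = pwl_norm(x), np.linalg.norm(x)
        assert exact * np.cos(np.pi / K) <= lo <= exact
        print(f"{exact:.4f}  {lo:.4f}  rel. err. {1.0 - lo / exact:.4%}")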

TE-05: Optimal Control and Optimization in Economics, Finance and Management II
Stream: Optimal Control and Optimization in Economics, Finance and Management
Room: Pontryagin
Chair(s): Gerhard-Wilhelm Weber

  • Risk measure and portfolio selection
    BEN HSSAIN LHOUCINE, Ghizlane Lakhnati This paper deals with risk measurement and portfolio optimization under risk constraints. Firstly, we give an overview of the Extended Gini shortfall (EGS), a special type of spectral risk measure that extends the Gini-type measures of risk and variability. We then consider portfolio problems based on the Extended Gini shortfall and obtain some useful results.
  • Investigating the reliability of the RBF method for solving some optimal control problems in Dynamic Investment
    Ahmad Saeedi, Ahmad Golbabai In this paper we have developed a MATLAB code based on Radial Basis Functions (RBFs) for solving optimal control problems arising in dynamic economic models. Some operational matrices have been produced, which helps to reduce the computational cost. The proposed method has good accuracy, but it suffers from ill-conditioning. In order to produce more reliable solutions, the effective condition number and some strategies including the BiCGSTAB and GMRES iterative solution methods have been used.
  • Market making policy for high-frequency trading
    Francis Huot-Chantal, Fabian Bastin, Gabriel Yergeau We describe a quoting policy for market making based on inventory models optimization, leading to a threshold policy, in the context of high-frequency trading. In our model, the market maker aims to maximize his profit while providing liquidity to the stock market and keeping the risk at a reasonable level. We assess the policy performance in terms of profitability and robustness using backtesting and agent-based simulation. Various simulation frameworks are considered, using policies based on time intervals or discrete events, and a zero-intelligence policy as a baseline.

TE-06: Optimization under Uncertainty and Applications II
Stream: Optimization under Uncertainty and Applications
Room: Moreau
Chair(s): Riccardo Cambini

  • An integrated multi-echelon robust closed- loop supply chain under imperfect quality production
    Theodore Trafalis, Ismail Almaraj In this paper, we consider a novel closed-loop supply chain design consisting of multiple periods and multiple echelons. The models are considered under imperfect-quality production with multiple uncertainties to provide meaningful solutions to practical problems. In addition, we assume that screening is not always perfect and that inspection errors are likely to take place in practice. We measure the amount of quality loss as conforming products deviate from the specification (target) value. In our study, we develop three robust counterpart models based on box, polyhedral, and combined uncertainty sets.
  • Robust Two-stage Polynomial Optimization
    Bissan Ghaddar, Olga Kuryatnikova, Daniel Molzahn In this work we consider two-stage polynomial optimization problems under uncertainty. We combine tools from polynomial and robust optimization to provide a framework for general adjustable robust polynomial optimization problems. In particular we propose an iterative algorithm to build a sequence of (approximately) robustly feasible solutions with an improving objective value and verify robust feasibility or infeasibility of the resulting solution under a semialgebraic uncertainty set. We implement our approach for a use-case in energy systems to show the performance of the proposed approach.
  • Hierarchical Fleet Mix Problems with risk-aversion: a CVaR approach
    Riccardo Cambini, Rossana Riccardi In this paper a two-stage stochastic hierarchical workforce model is studied from both a theoretical and an algorithmic point of view. In the considered hierarchical model workforce units can be substituted by higher qualified ones; external workforce can also be hired to cover unfulfilled jobs. Demand for jobs is assumed to be stochastic. The results of a computational test are provided in order to validate the model.

Friday

Friday, 9:00 – 10:40

FA-01: Mathematical Analysis of Optimization Methods I
Stream: Mathematical Analysis of Optimization Methods
Room: Fermat
Chair(s): David Müller

  • Decompositions and projections with respect to some classes of non-symmetric cones
    Jein-Shan Chen In this talk, we focus on issues of decompositions and projections w.r.t. some classes of non-symmetric cones. These concepts and the obtained results pave a way to deal with non-symmetric cone optimization.
  • Strong duality, boundedness and a theorem of the alternatives in conic optimization
    Akshay Gupte Besides strict feasibility, there are other sufficient conditions for strong duality in conic optimization that are somewhat less well-known, these being a closedness condition and boundedness of the feasible region inside a pointed cone. We generalize the closedness condition and show that the other two follow as a consequence of it. We also give an alternate proof of the result that if one problem is strictly feasible, then it has a finite optimum if and only if the other problem is feasible. Finally, we give a theorem of the alternative that generalizes what is known for proper cones.
  • Properties of Generalized Oriented Distance Function and its Applications to Set Optimization Problems
    Pradeep Kumar Sharma In this talk, we discuss several interesting properties of the generalized oriented distance function with respect to co-radiant sets and free disposal sets, which are more general than a cone. In particular, we deal with some special properties, such as translativity, sub-additivity and monotonicity, using co-radiant sets, so that this function can be used as a coherent measure of investment in financial mathematics. As an application, we study some optimality conditions for quasi-minimal solutions of set optimization problems.
  • Discrete choice prox-functions on the simplex
    David Müller We derive new prox-functions on the simplex from additive random utility models of discrete choice. They are convex conjugates of the corresponding surplus functions. In particular, we explicitly derive the convexity parameter of discrete choice prox-functions associated with generalized extreme value models and specifically with generalized nested logit models. Incorporated into subgradient schemes, discrete choice prox-functions lead to natural probabilistic interpretations of the iteration steps. We also discuss an economic application of discrete choice prox-functions in consumer theory.
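
A classical special case worth keeping in mind for the last talk (standard in this literature; the talk treats far more general models): for the multinomial logit model with noise scale \mu > 0, the surplus function is the smoothed maximum, and its convex conjugate restricted to the simplex \Delta_n is the scaled entropy, giving a prox-function that is \mu-strongly convex with respect to \|\cdot\|_1:

    E(u) = \mu \log \Big( \sum_{i=1}^{n} e^{u_i/\mu} \Big), \qquad
    E^{*}(x) = \mu \sum_{i=1}^{n} x_i \log x_i \quad \text{for } x \in \Delta_n.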

FA-02: Beyond First-order Optimization Methods for Machine Learning – Part III
Stream: Advances in mathematical optimization for machine learning
Room: Turing
Chair(s): Fred Roosta, Albert Berahas

  • Large-scale derivative-free optimization using subspace methods
    Lindon Roberts, Coralia Cartis In existing techniques for model-based derivative-free optimization, the computational cost of constructing local models and Lagrange polynomials can be high. As a result, these algorithms are not as suitable for large-scale problems as derivative-based methods. In this talk, I will discuss a model-based derivative-free algorithm based on exploration of random subspaces, its worst-case complexity bounds, and some numerical results.
  • Efficient Newton Methods for Robust Stochastic Nonconvex Optimization
    Thomas O’Leary-Roseberry, Nick Alger, Omar Ghattas In this talk we explore how low-dimensional geometric information of Hessians arising in stochastic nonconvex optimization problems can be used to design matrix-free Newton methods that have per-iteration computational complexity similar to first-order methods. These methods can reduce the dependence on hyperparameters and improve convergence. Furthermore, these methods can be adapted to guard against overfitting by using the Hessian to detect noise-dominated geometric information of the stochastic energy landscape.
  • A stochastic second order-type extra-step scheme for nonsmooth nonconvex optimization
    Andre Milzarek, Minghan Yang, Zaiwen Wen, Tong Zhang We present a stochastic second order-type extra-step method for solving nonsmooth nonconvex optimization problems. We assume that gradient and Hessian information of the smooth part of the objective function can only be approximated. Our method combines stochastic higher order steps for an underlying optimality condition with stochastic proximal steps to ensure convergence in expectation. A variant of our approach using variance reduction is discussed and we show that it enjoys better convergence properties. Numerical results on logistic regression and sparse deep learning are provided.
  • Doubly Adaptive Scaled Algorithm for Machine Learning Using Second-Order Information
    Martin Takac, Majid Jahani, Sergey Rusakov, Zheng Shi, Peter Richtarik, Michael Mahoney This paper presents a novel adaptive optimization algorithm for large-scale machine learning problems. Equipped with a low-cost estimate of local curvature and Lipschitz smoothness, our method dynamically adapts the search direction and step-size. The search direction contains the gradient information well-scaled by a diagonal preconditioning matrix that captures the local curvature information. Our methodology does not require the tedious task of learning rate tuning, as the learning rate is updated automatically without adding an extra hyper-parameter.

FA-03: Equilibria, variational models and applications
Stream: Variational inequalities, Nash games, game theory, multilevel and dynamic optimization
Room: Nash
Chair(s): Mauro Passacantando

  • Multi-Leader-Follower Potential Games
    Sonja Steffensen We discuss a particular class of Nash games, where the participants of the game (the players) are divided into two groups (leaders and followers) according to their position or influence on the other players. Moreover, we consider the case when the leaders’ and/or the followers’ game can be described as a potential game. This is a subclass of Nash games that was introduced by Monderer and Shapley in 1996. We develop necessary and sufficient conditions for Nash equilibria and present existence and uniqueness results. Furthermore, we discuss some examples to illustrate our results.
  • A Decomposition Algorithm for Nash Equilibria in Intersection Management
    Andreas Britzelmeier, Axel Dreves We consider autonomous vehicles that communicate in order to find optimal and collision-free driving strategies. This results in coupled optimal control problems with nonconvex shared constraints, and we consider a generalized Nash equilibrium reformulation of the problem. To handle the nonconvexity, we employ a partial penalty approach and reformulate the problem as a generalized potential game. We propose a decomposition algorithm with penalty selection to avoid a priori hierarchies. Using dynamic programming, we prove convergence of our algorithm, and we provide numerical and experimental results.
  • Lagrange multipliers for a non-constant gradient constraint problem
    SOFIA GIUFFRE’ The aim of the talk is to show the existence of Lagrange multipliers associated with linear variational inequalities with a non-constant gradient constraint over convex domains. In particular, we first show the equivalence between the non-constant gradient constrained problem and a suitable obstacle problem, where the obstacle solves a Hamilton-Jacobi equation in the viscosity sense. Then, we obtain the existence of Lagrange multipliers associated with the problem. The results have been obtained using strong duality theory.
  • Radial solutions to a nonconstant gradient constraint problem
    Attilio Marcianò, SOFIA GIUFFRE’ Our work deals with radial solutions to a nonconstant gradient constraint problem in a ball of R^n and the Lagrange multipliers associated with the problem. The problem is formulated by means of a variational inequality, and under a suitable assumption we obtain, for n = 2, a necessary and sufficient condition characterizing the free boundary. In particular, this condition allows us to define the explicit solution and the Lagrange multiplier. Finally, some examples illustrate the results.

FA-04: Multiobjective Mixed Integer Optimization
Stream: Multiobjective Optimization
Room: Lagrange
Chair(s): Marianna De Santis, Gabriele Eichfelder

  • An objective space based algorithm for biobjective mixed integer programming problems
    Firdevs Ulus, Ozlem Karsu, Deniz Emre We propose an objective-space-based exact solution algorithm for biobjective mixed integer programming problems. The algorithm solves scalarization models in order to explore predetermined regions of the objective space called boxes, defined by two nondominated points. At each iteration of the algorithm, a box is explored either by a weighted-sum or a Pascoletti-Serafini scalarization (recalled after this session’s listing) to determine nondominated line segments and points. We demonstrate the applicability of the algorithm through computational experiments.
  • Toward a Branch-and-Bound Algorithm for Biobjective Mixed Integer Quadratic Programs
    Margaret Wiecek Biobjective quadratic programs with mixed-integer variables (BOMIQPs) model decision-making situations encountered in finance, economics, engineering, and other areas of human activity. We present the work we have accomplished so far toward the development of a branch-and-bound (BB) algorithm for convex BOMIQPs. The algorithm consists of the following components: (i) algorithms for solving auxiliary single-objective parametric quadratic programs; (ii) fathoming rules; (iii) branching rules; (iv) rules to establish (non)dominance between sets; (v) rules for updating the incumbent nondominated set.
  • Finding nondominated and efficient solutions of multiobjective integer quadratic programming problems
    Marianna De Santis, Gabriele Eichfelder We present a deterministic method for minimizing multiple quadratic objective functions over integer variables. Our method looks for efficient points by fixing subsets of variables to integer values and by using lower bounds in the form of hyperplanes in the image space derived from the continuous relaxations of the restricted objective functions. We show that the algorithm finds all efficient and all nondominated points after finitely many fixings of variables. Numerical examples on bi-objective instances as well as on instances with three and four objectives are shown.
  • Pareto Front Approximation through a Multi-objective Augmented Lagrangian Method
    Pierluigi Mansueto, Matteo Lapucci, Guido Cocchi In this work, we propose an extension of a multi-objective augmented Lagrangian method from recent literature, suitable for smooth convex constrained multi-objective optimization problems. The new algorithm is designed to handle sets of points and produce good Pareto front approximations, as opposed to the original one, which converges to a single solution. We prove Pareto stationarity and global convergence properties for the sequences of points generated by our method. We then compare it with the main state-of-the-art algorithms on some problems, showing its effectiveness and general superiority.
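
For reference, the two scalarizations used by the box-exploration algorithm in the first talk above can be stated schematically for a biobjective problem with objectives f_1, f_2 over a feasible set X; the reference point a and direction r below are generic placeholders, not necessarily the talk's exact choices:

\[
\min_{x \in X} \; w_1 f_1(x) + w_2 f_2(x) \quad (w_1, w_2 \ge 0),
\qquad
\min_{x \in X,\; t \in \mathbb{R}} \; t \quad \text{s.t.}\; f_i(x) \le a_i + t\, r_i,\; i = 1, 2.
\]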

FA-05: Optimization and Artificial Intelligence II
Stream: Optimization and Artificial Intelligence
Room: Pontryagin
Chair(s): Sébastien Gerchinovitz

  • K-Quant: a non-uniform post-training quantization algorithm
    Enrico Civitelli, Leonardo Taccari, Fabio Schoen Quantization is a simple yet effective way to deploy deep neural networks on resource-limited hardware. Post-training quantization algorithms are particularly interesting because they do not require the full dataset to run. In this work we explore a way to perform non-uniform post-training quantization, using an optimization algorithm to minimize the output differences between each compressed layer and the original one. The proposed method significantly reduces the memory required by the neural network without affecting its accuracy. (A toy illustration of non-uniform quantization follows after this session.)
  • An Adaptive ML-Based Discretization Method for Computing Optimal Experimental Designs
    Philipp Seufert, Jan Schwientek, Tobias Seidel, Michael Bortz, Karl-Heinz Küfer Standard algorithms for the computation of optimal experimental designs (OED) consist of an inner point acquisition and an outer weight optimization. Whereas the latter is a convex problem, the inner one is a general non-convex non-linear program with implicitly given objective. We present a modification of the common OED solution approach which uses Bayesian optimization to adaptively form a grid of candidate points for determining the optimal design. We proved convergence of the algorithm to a locally optimal continuous design and obtained promising numerical results on real-world problems.
  • Sparse RBF Regression for the Optimization of Noisy Expensive Functions
    Alessio Sortino, Matteo Lapucci, Fabio Schoen Global optimization problems for black-box functions are usually addressed by building a surrogate model over the data and an acquisition function to decide where to place the next observation. When data are noisy, the surrogate should not trust them too much. This typically introduces an extra hyperparameter into the model that corresponds to the variance of the noise. In this work we present a novel approach where a robust RBF-based surrogate model is built from the solution of a particular MIQP problem. Experimental results show the effectiveness of our approach with respect to existing methods.
  • On the optimality of the Piyavskii-Shubert algorithm for global Lipschitz optimization: a bandit perspective
    Clément Bouttier, Sébastien Gerchinovitz, Tommaso Cesari We consider the problem of maximizing a non-concave Lipschitz multivariate function over a compact domain by sequentially querying its (possibly perturbed) values. We study a natural algorithm originally designed by Piyavskii and Shubert in 1972, for which we prove new bounds on the number of evaluations of the function needed to reach or certify a given optimization accuracy. Our analysis uses a bandit-optimization viewpoint and solves an open problem from Hansen et al. (1991), by bounding the number of evaluations to certify a given accuracy with a simple and optimal integral. (A one-dimensional sketch of the Piyavskii-Shubert algorithm follows after this session.)
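
As a rough illustration of the non-uniform quantization idea in the K-Quant talk above, the following toy sketch builds a non-uniform codebook with a one-dimensional k-means loop and measures the layer-wise output difference that such methods aim to minimize. All names and sizes are made up for the example; the actual K-Quant optimization algorithm is not reproduced here.

```python
import numpy as np

def nonuniform_quantize(w, n_levels=16, iters=50):
    """Toy non-uniform quantizer: 1-D k-means on the weights.

    The codebook (cluster centres) adapts to the weight distribution,
    unlike fixed-step uniform quantization."""
    levels = np.quantile(w, np.linspace(0, 1, n_levels))  # initial codebook
    for _ in range(iters):
        idx = np.abs(w[:, None] - levels[None, :]).argmin(axis=1)
        for j in range(n_levels):
            if np.any(idx == j):
                levels[j] = w[idx == j].mean()
    return levels[idx]

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))        # stand-in for one layer's weights
x = rng.normal(size=(8, 64))         # a few calibration inputs
Wq = nonuniform_quantize(W.ravel()).reshape(W.shape)
# Layer-wise output difference, the quantity such methods minimize:
print(np.linalg.norm(x @ W.T - x @ Wq.T) / np.linalg.norm(x @ W.T))
```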
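
The Piyavskii-Shubert algorithm itself is classical; a minimal one-dimensional sketch follows (the talk treats the multivariate, possibly perturbed setting, which this does not capture). It maintains the piecewise-linear upper envelope induced by the Lipschitz constant and always evaluates at the envelope's peak.

```python
import math

def piyavskii_shubert(f, a, b, L, eps=1e-3, max_evals=100):
    """Maximize a Lipschitz function f on [a, b] with constant L.

    Repeatedly evaluates f where the piecewise-linear upper envelope
    peaks, until the gap between envelope and incumbent is below eps."""
    pts = sorted([(a, f(a)), (b, f(b))])
    best_x, best_f = max(pts, key=lambda p: p[1])
    for _ in range(max_evals - 2):
        # Envelope peak on [x1, x2]: where the cones f1 + L(x - x1)
        # and f2 + L(x2 - x) intersect.
        cand = []
        for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
            x_new = 0.5 * (x1 + x2) + (f2 - f1) / (2.0 * L)
            u_new = 0.5 * (f1 + f2) + 0.5 * L * (x2 - x1)
            cand.append((u_new, x_new))
        u_max, x_next = max(cand)
        if u_max - best_f <= eps:   # certified eps-optimal
            break
        fx = f(x_next)
        if fx > best_f:
            best_x, best_f = x_next, fx
        pts = sorted(pts + [(x_next, fx)])
    return best_x, best_f

# Example: a wiggly 1-D function on [0, 4]; L bounds |f'| by hand.
print(piyavskii_shubert(lambda x: math.sin(3 * x) - 0.1 * x, 0.0, 4.0, L=3.5))
```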

FA-06: Derivative-Free Optimization II
Stream: Derivative-Free Optimization
Room: Moreau
Chair(s): Clément Royer

  • Distributed Derivative-Free Optimization
    Vyacheslav Kungurtsev There are a number of situations in which multiple agents seek to cooperatively solve a central optimization problem using a combination of computations based on local information and communication across a network modeled as a graph. In contexts arising from hyperparameter tuning or the cooperative control of functions defined by simulations, these problems are derivative-free. In this talk we study the classical DFO algorithms adapted for distributed optimization. The theoretical properties and numerical performance are presented and assessed.
  • Derivative-free optimization for large-scale structured problems
    Andrea Cristofari, Francesco Rinaldi In this talk, a derivative-free algorithm is proposed to minimize a black-box objective function over the convex hull of a given set of points. At each iteration, the approach suitably handles a reduced problem, based on an inner approximation of the feasible set. This inner approximation is dynamically updated by using rules that in general allow us to keep the dimension of the problem small. Theoretical convergence results and numerical comparison with state-of-the-art DFO solvers will be presented.
  • A stochastic Levenberg-Marquardt method using random models
    Clément Royer, El houcine Bergou, Youssef Diouane, Vyacheslav Kungurtsev In this talk, we describe a stochastic Levenberg-Marquardt algorithm that handles noisy objective function values and random models. Provided the probability of accurate function estimates and models is sufficiently large, we are able to endow our proposed framework with global convergence rates. Our results rely on both a specific scaling of the regularization parameter and a measure of criticality tailored to least-squares problems. We compare our approach with several rates available for stochastic algorithms, and discuss the assumptions under which these schemes can be analyzed. (The classical Levenberg-Marquardt step is recalled after this session.)
  • A Derivative-free Adaptation of the Penalty Decomposition Method for Sparse Optimization
    Matteo Lapucci, Marco Sciandrone We consider the problem of minimizing a smooth function with a cardinality constraint. A well-known approach to tackle such problems is the Penalty Decomposition (PD) method, which requires exactly solving subproblems in the dimension of the original problem. In this talk, we present a derivative-free PD algorithm, where the exact minimization step is replaced by inexact line searches along suitable sets of directions. We state theoretical convergence properties of the proposed algorithm equivalent to those of the original gradient-based PD method. Finally, we show the results of preliminary experiments.
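
For context, the classical deterministic Levenberg-Marquardt step for a least-squares problem \(\min_x \tfrac{1}{2}\|r(x)\|^2\) solves

\[
\big(J_k^\top J_k + \lambda_k I\big)\, s_k = -\,J_k^\top r_k, \qquad x_{k+1} = x_k + s_k,
\]

where \(J_k\) is the Jacobian of \(r\) at \(x_k\) and \(\lambda_k > 0\) is the regularization parameter. The stochastic framework of the third talk above replaces \(r_k\) and \(J_k\) with random estimates and relies on a specific scaling of \(\lambda_k\), which is not reproduced here.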

Friday, 11:00 – 12:40

FB-01: Mathematical Analysis of Optimization Methods II
Stream: Mathematical Analysis of Optimization Methods
Room: Fermat
Chair(s): Jean-Baptiste Hiriart-Urruty

  • The simplest KKT and FJ optimality conditions via subdifferential calculus of pointwise suprema
    Abderrahim Hantoute We first present some new characterizations of the subdifferential of pointwise suprema. Compared to previous results in the literature, the present ones are given only by means of the data functions, so there is no need to consider the normal cone to the domain of the supremum, nor to require intersections over finite-dimensional subspaces. Based on this, we develop some new KKT and FJ optimality conditions for a general convex optimization problem with (possibly) infinitely many constraints. The results discussed here are based on recent joint work with M. A. López and R. Correa.
  • Optimality conditions for an exhausterable function on an exhausterable set
    Majid Abbasov Exhausters were proposed by V.F. Demyanov. These are families of convex compact sets that allow one to represent the directional derivative of the studied function at a point as a minmax or maxmin of linear functions. Functions for which such a representation is valid are called exhausterable. A set defined via an exhausterable function is also called exhausterable. In the present work we describe optimality conditions for an exhausterable function on an exhausterable set. The reported study was supported by the Russian Science Foundation (RSF), project No. 20-71-10032.
  • Minimal sublinear functions and application to Cut Generating Functions
    Alberto Zaffaroni Given a convex set V, its recession hull is obtained by taking the intersection of all translates of the recession cone that contain V. We first study its main features and characterizations. Based on this, we study minimality of sublinear functions among those whose lower 1-level set has a prescribed recession cone. We prove that if F is recession minimal, then its lower 1-level set is regular, in the sense that it coincides with its recession hull. Finally, we apply these results to Cut Generating Functions and provide a complete characterization of minimal CGFs.
  • A fresh geometrical look at the general S-procedure
    Jean-Baptiste Hiriart-Urruty, Michel De Lara We revisit the S-procedure for general functions with “geometrical glasses”. We thus delineate a necessary condition, and almost a sufficient one, for the S-procedure to be valid. Everything is expressed in terms of convexity of augmented sets (convex hulls, conical hulls) of images built from the data functions.

FB-02: Advances in Douglas-Rachford method – Part II
Stream: Advances in mathematical optimization for machine learning
Room: Turing
Chair(s): Cong Bang Vu, Dimitri Papadimitriou

  • Shadow Douglas-Rachford splitting
    Matthew Tam In this talk, I will introduce the shadow Douglas-Rachford method for finding a zero in the sum of two monotone operators where one is assumed to be single-valued and Lipschitz continuous. This algorithm naturally arises from a non-standard discretisation of a continuous dynamical system associated with the Douglas–Rachford splitting algorithm. More precisely, it is obtained by performing an explicit, rather than implicit, discretisation with respect to one of the operators involved. Each iteration of the proposed algorithm requires the evaluation of one forward and one backward operator.
  • The Cyclic Douglas-Rachford Algorithm with r-sets-Douglas-Rachford Operators
    Aviv Gibali, Francisco Javier Aragón Artacho, Yair Censor The Douglas-Rachford (DR) algorithm is an iterative procedure that uses sequential reflections onto convex sets and has become popular for convex feasibility problems. In this talk we present a structural generalization that allows the use of r-sets-DR operators in a cyclic fashion and also demonstrates great computational advantages over existing results.
  • Forward-partial inverse-half-forward splitting algorithm for solving monotone inclusions with applications
    Yuchao Tang In this paper, we propose a forward-partial inverse-half-forward splitting (FPIHFS) algorithm for finding a zero of the sum of a maximally monotone operator, a monotone Lipschitzian operator, a cocoercive operator, and the normal cone of a closed vector subspace. The FPIHFS algorithm is derived from a combination of the partial inverse method with the forward-backward-half-forward splitting algorithm. As applications, we apply it to the projection onto Minkowski sums of convex sets and to the generalized Heron problem.
  • Multivariate Monotone Inclusions in Saddle Form
    Patrick Combettes, Minh Bui We propose a novel approach to monotone operator splitting based on the notion of a saddle operator. We study a highly structured multivariate monotone inclusion problem involving a mix of set-valued, cocoercive, and Lipschitzian monotone operators, as well as various monotonicity-preserving operations among them. This leads to an algorithm of unprecedented flexibility, which achieves full splitting, uses the specific attributes of each operator, is asynchronous, and requires activating only blocks of operators at each iteration, as opposed to all of them. Applications are presented.

FB-03: Game theory, multilevel and dynamic optimization II
Stream: Variational inequalities, Nash games, game theory, multilevel and dynamic optimization
Room: Nash
Chair(s): Giancarlo Bigi

  • Near-Optimal Robust Bilevel Optimization
    Miguel F. Anjos, Mathieu Besançon, Luce Brotcorne We introduce near-optimal robustness for bilevel optimization problems to protect the upper-level decision-maker from bounded rationality at the lower level. We show that it is a restriction of the corresponding pessimistic bilevel problem. Essential properties are derived in generic and specific settings. This model has an intuitive interpretation in various situations cast as bilevel optimization problems. We develop a duality-based solution method for cases where the lower level is convex. The models obtained are successfully tested numerically using different solvers and formulations.
  • A Bilevel Optimization Model for Generative Adversarial Networks
    Zeynep Suvak, Miguel F. Anjos, Luce Brotcorne, Diego Cattaruzza A generative adversarial network is a framework to model an adversarial process where a generator produces data and a discriminator decides whether the data is real or fake. We focus on the example where the generator is a counterfeiter producing fake banknotes and the discriminator is the police trying to detect the fakes. This problem is formulated as a bilevel optimization problem where the counterfeiter determines the banknotes to be given to the police and the police decides which features to use to check authenticity. The details of the model and computational results will be presented.
  • Best response approaches for optimization problems with equilibrium constraints
    Maria Carmela Ceparano, Francesco Caruso, Jacqueline Morgan We consider a one-leader two-follower Stackelberg game where the leader’s payoff depends on the followers’ actions but the followers’ payoffs do not depend on the leader’s actions. We first present a theoretical method exploiting affine relaxations of the classical best response algorithm which globally converges to an equilibrium for a class of Stackelberg games where the uniqueness of the Nash equilibrium of the game played by the followers is ensured. Then, the convergence of a derivative-free numerical method will be shown, together with error bounds.
  • An explicit Tikhonov algorithm for nested variational inequalities
    Lorenzo Lampariello, Christoph Neumann, Jacopo Maria Ricci, Simone Sagratella, Oliver Stein We consider nested variational inequalities, consisting of an (upper-level) variational inequality whose feasible set is given by the solution set of another (lower-level) variational inequality. Purely hierarchical convex bilevel optimization problems and certain multi-follower games are particular instances of nested variational inequalities. We present an explicit and ready-to-implement Tikhonov-type solution method for such problems. We give conditions that guarantee global strong convergence of the proposed method and we provide a convergence rate analysis. (A schematic statement is sketched below.)
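
In generic notation (ours, not necessarily the speakers'), the nested problem of the last talk reads: with lower-level solution set

\[
S = \{\, y \in C \;:\; \langle G(y),\, z - y \rangle \ge 0 \;\; \forall z \in C \,\},
\]

find \(x^* \in S\) such that \(\langle F(x^*),\, y - x^* \rangle \ge 0\) for all \(y \in S\). A basic Tikhonov-type scheme replaces this by the single-level problems \(\mathrm{VI}(C,\, G + \varepsilon_k F)\) with \(\varepsilon_k \downarrow 0\); the explicit method of the talk refines this idea.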

FB-04: Numerical Methods for Multiobjective Optimization
Stream: Multiobjective Optimization
Room: Lagrange
Chair(s): Leo Warnow, Gabriele Eichfelder

  • An efficient descent method for locally Lipschitz multiobjective optimization problems
    Bennet Gebken In this talk, we propose a new descent method for multiobjective optimization problems (MOPs) with locally Lipschitz objective functions and afterwards combine it with the subdivision method to compute the whole Pareto set. The descent direction is based on epsilon-subdifferentials, which are iteratively enriched with gradient information until a satisfying direction is found. Combined with a modified Armijo step length, we prove convergence of the method to points that satisfy a necessary condition for Pareto optimality. Finally, the descent method is inserted into the subdivision method to compute the whole Pareto set.
  • Worst-case complexity bounds of directional direct-search methods for multiobjective derivative-free optimization
    Rohollah Garmanjani, Ana Luisa Custodio, Youssef Diouane, Elisa Riccietti Direct Multisearch (DMS) is a well-established class of algorithms for multiobjective derivative-free optimization. We analyze the worst-case complexity of this class of methods in its most general form, for the class of nonconvex smooth functions. We then focus on a particular instance of DMS, which considers a stricter criterion for accepting new nondominated points and has a worst-case complexity bound similar to the one obtained for gradient descent for the same class of problems: of the order of \(\epsilon^{-2}\) evaluations to drive a criticality measure below a threshold \(\epsilon\).
  • A norm-minimization based convex vector optimization algorithm
    Muhammad Umer, Firdevs Ulus, Cagin Ararat We propose an algorithm to generate inner and outer polyhedral approximations to the upper image of a bounded convex vector optimization problem. It is an outer approximation algorithm based on solving norm-minimizing scalarizations, which differ from the Pascoletti-Serafini scalarization in the sense that they do not involve a direction parameter. Therefore, the algorithm is free from direction bias. The finiteness of the algorithm is shown, and its convergence rate is studied under additional assumptions. (The scalarization is sketched after this session.)
  • Computing a box-coverage for multi-objective optimization problems
    Leo Warnow, Gabriele Eichfelder In this talk, we present how to compute a box-coverage of the nondominated set of a continuous multi-objective optimization problem. To do so, we use an approach that, in general, allows us to update not only one but several boxes whenever a new nondominated point is found. We also present some key results for that algorithm. For example, we show that we can guarantee an improvement of the boxes in each iteration and that the algorithm terminates in finite time with a finite number of boxes.
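
A hedged sketch of the norm-minimizing scalarization from the third talk of this session, in generic notation: for a vector objective \(f\), feasible set \(X\) and ordering cone \(C\), given a point \(v\) outside the upper image, one solves

\[
\min_{x \in X,\; z \in \mathbb{R}^m} \; \|z\| \qquad \text{s.t.} \qquad f(x) \le_C v + z,
\]

whereas the Pascoletti-Serafini scalarization fixes a direction \(r\) and minimizes \(t\) subject to \(f(x) \le_C v + t\, r\); dropping the direction parameter is what removes the direction bias.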

FB-05: Optimal Control and Applications II
Stream: Optimal Control and Applications
Room: Pontryagin
Chair(s): Björn Martens

  • A new approach for solving the multi-dimensional control optimization problems with first-order PDEs constraints
    Anurag Jayswal, Tadeusz Antczak, Preeti Kardam This paper aims to solve the multi-dimensional control optimization problem involving first-order PDE constraints (MCOP). Firstly, we apply the modified objective function approach to (MCOP) and show that the solution sets of the original problem and its associated modified problem are equivalent. Further, we apply the penalty function method to transform the modified problem into an equivalent penalized problem. We also establish the relationship between a saddle point of the modified problem and a minimizer of its penalized problem, and give examples to verify the established results.
  • Multimaterial topology optimization of a heated channel
    Alexandre Vieira The presentation will deal with a topology optimization problem in a heated channel. The problem, modelled using the incompressible Navier-Stokes equations coupled with the convection-diffusion equation, consists in minimizing pressure drop and maximizing heat transfer for some application in reduced mechanical air conditioning. The analysis focuses on the existence of a solution, the convergence of the numerical approximation using finite elements (including the convergence of the approximated optimum, using controls in BV), and the effect of using multiple materials to design the topology.
  • On Preconditioners for PDE-Constrained Optimization Problems with Higher-Order Discretization in the Time Variable
    Santolo Leveque, John Pearson Optimization problems with PDE constraints arise in numerous scientific applications. In this talk, we consider preconditioners for time-dependent PDE optimization, suitably discretized in time. The state of the art is to apply a backward Euler method: this leads to desirable structures within the resulting matrix system and effective preconditioners, but only a first-order accurate method. Here we present a second-order method, using Crank-Nicolson in time, and a new preconditioner for the more complex matrix. We show that this approach can obtain a more accurate solution in less CPU time. (Both time discretizations are recalled after this session.)
  • Convergence analysis for approximations of optimal control problems subject to higher index differential-algebraic equations and mixed control-state constraints
    Björn Martens, Matthias Gerdts This paper establishes a convergence result for implicit Euler discretizations of optimal control problems with DAEs of higher index and mixed control-state constraints. The main difficulty of the analysis is caused by a structural discrepancy between the necessary conditions of the continuous problem and the necessary conditions of the discretized problems. This discrepancy does not allow one to compare the respective necessary conditions directly. We use an equivalent reformulation of the discretized problems to overcome this discrepancy and to prove first order convergence.
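
The two time discretizations contrasted in the preconditioning talk above can be recalled for a generic linear state equation \(y' = L y + u\) with step size \(\Delta t\) (a textbook statement, not the talk's exact system):

\[
(I - \Delta t\, L)\, y_{n+1} = y_n + \Delta t\, u_{n+1}
\qquad \text{(backward Euler, first order)},
\]
\[
\Big(I - \tfrac{\Delta t}{2} L\Big)\, y_{n+1} = \Big(I + \tfrac{\Delta t}{2} L\Big)\, y_n + \tfrac{\Delta t}{2}\,\big(u_n + u_{n+1}\big)
\qquad \text{(Crank-Nicolson, second order)}.
\]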

FB-06: Advanced Optimization Methods II
Stream: Advanced Optimization Methods
Room: Moreau
Chair(s): Jacek Gondzio

  • New Bregman proximal type algorithms for solving DC optimization problems
    Shota Takahashi, Mituhiro Fukuda, Mirai Tanaka Difference-of-convex (DC) optimization problems have objective functions that are differences of two convex functions. A proximal DC algorithm (PDCA) solves large-scale DC optimization problems, assuming L-smoothness of the objective functions. In this talk, using Bregman distances, we propose a Bregman proximal DC algorithm (BPDCA) that does not require L-smoothness. In addition, by accelerating the BPDCA with an extrapolation technique, we propose the Bregman proximal DC algorithm with extrapolation (BPDCAe). (A schematic form of the update is sketched after this session.)
  • Parameter tuning of linear programming solvers
    Nikolaos Ploskas Linear programming solvers have a large set of parameters that allow users to control algorithmic aspects. Tuning solver options may have a considerable impact on solver performance. Previous efforts to tune solver parameters have used derivative-free optimization algorithms. In this work, we apply various derivative-free optimization solvers in order to find high quality tuning parameters for linear programming solvers. This work investigates how sensitive linear programming solvers are to a parameter tuning process. We present extensive computational results.
  • A modified barrier method with algebraic multigrid solver
    Alexander Brune, Michal Kocvara The Penalty-Barrier Multiplier (PBM) method, originally introduced by R. Polyak and later studied by Ben-Tal, Zibulevsky, and others, proved to be an efficient tool for solving very large-scale optimization problems. The numerical bottleneck of the method (as of similar methods) lies in the repeated solution of very large linear systems. We will show that for problems resulting from topology optimization of mechanical structures on irregular finite element meshes, algebraic multigrid can be used to solve problems of large dimension, unsolvable by direct methods.
  • A new stopping criterion for PCG applied in IPM
    Filippo Zanetti, Jacek Gondzio When the Conjugate Gradient Method is used as a linear solver within an Interior Point Method (IPM), attention is usually placed on accelerating its convergence by designing appropriate preconditioners, and the PCG is applied as a black-box solver. We design and analyze a new specialized termination criterion for PCG applied in the context of IPMs. The new criterion has been tested on a set of linear and quadratic optimization problems, including compressed sensing and image processing, and has demonstrated consistent and significant improvements in terms of CG iterations and computational time. (A generic PCG loop with a pluggable stopping test is sketched after this session.)
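
Schematically, for a DC objective \(f + g - h\) (with \(f\) smooth, \(g\) nonsmooth convex, \(h\) convex), a Bregman proximal DC step as in the first talk of this session replaces the squared-Euclidean proximity of the classical PDCA by a Bregman distance \(D_\phi\) generated by a kernel \(\phi\); the following generic statement may differ from the talk's exact assumptions:

\[
\xi^k \in \partial h(x^k), \qquad
x^{k+1} \in \operatorname*{arg\,min}_{x} \Big\{ g(x) + \langle \nabla f(x^k) - \xi^k,\, x \rangle + L\, D_\phi(x, x^k) \Big\},
\]

where \(D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y),\, x - y \rangle\); the choice \(\phi = \tfrac{1}{2}\|\cdot\|^2\) recovers the classical PDCA.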
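
The specialized termination test of the last talk is not reproduced here, but the place where it enters is easy to see in a generic preconditioned CG loop. The sketch below (plain NumPy, illustrative names) takes the stopping criterion as a pluggable predicate:

```python
import numpy as np

def pcg(A, b, M_inv, stop, x0=None, max_iter=500):
    """Preconditioned conjugate gradients with a pluggable stopping test.

    `stop(k, x, r)` returns True when the iterate is good enough; in an
    IPM context this is where a specialized criterion (rather than a
    plain residual-norm test) would be plugged in."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    z = M_inv @ r           # apply the preconditioner
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        if stop(k, x, r):
            break
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Plain relative-residual test; an IPM-specific test would replace it.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
tol_test = lambda k, x, r: np.linalg.norm(r) <= 1e-8 * np.linalg.norm(b)
print(pcg(A, b, np.eye(2), tol_test))
```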

Friday, 14:00 – 15:00

FC-01: Plenary 2 – Oliver STEIN
Stream: Plenary
Room: Fermat
Chair(s): Giancarlo Bigi

  • Granularity — a bridge between continuous and discrete optimization
    Oliver Stein In this talk we sketch the development of the granularity concept in optimization over the past five years. Granularity relaxes the difficulties imposed by integrality conditions and often provides ways for determining good feasible points of mixed-integer optimization problems at low computational cost. Starting from error bound results for roundings in mixed-integer linear optimization, we illustrate how this concept unfolded to provide algorithms for the computation of feasible points in mixed-integer linear, convex and nonconvex optimization. We also comment on the treatment of equality constraints and explain the integration of the granularity idea into branch-and-bound frameworks.

Friday, 15:15 – 16:30

FD-01: Derivative-Free Optimization III
Stream: Derivative-Free Optimization
Room: Fermat
Chair(s): Youssef Diouane

  • Optimization of noisy blackboxes with adaptive precision
    Pierre-Yves Bouchet, Charles Audet, Sébastien Le Digabel, Stephane Alarie In derivative-free optimization, the objective may be evaluated through a computer program which returns a stochastic output with tunable variance. As a low variance leads to a high cost per evaluation, we propose an approach which allows the variance to remain high. Some satisfying results on an industrial problem are given.
  • Derivative-free methods for mixed-integer nonsmooth constrained optimization problems
    Stefano Lucidi, Tommaso Giovannelli, Giampaolo Liuzzi, Francesco Rinaldi In this paper, we propose new linesearch-based methods for mixed-integer nonsmooth constrained optimization problems where first-order information on the problem functions is not available. First, we describe a general framework for mixed-integer bound constrained problems. Then we use an exact penalty approach to tackle the presence of nonlinear (possibly nonsmooth) constraints. We analyze the global convergence properties of all the proposed algorithms toward Clarke stationary points and we report results of a preliminary numerical experience on a set of mixed-integer problems.
  • A Merit Function Approach for Evolution Strategies
    Youssef Diouane In this talk, we present a class of globally convergent evolution strategies for solving relaxable constrained optimization problems. The proposed framework handles relaxable constraints using a merit function approach combined with a specific restoration procedure. The introduced extension guarantees global convergence of the considered class of evolution strategies to first-order stationary points. Comparison tests are carried out using two sets of known constrained optimization problems.

FD-02: Advances in mathematical optimization for machine learning and data analysis – Part III
Stream: Advances in mathematical optimization for machine learning
Room: Turing
Chair(s): Dimitri Papadimitriou

  • Mixed integer optimization in ARMA models
    Leonardo Di Gangi Model selection and fitting are two critical aspects of Auto Regressive Moving Average (ARMA) models. We propose an algorithm which performs the selection and estimation of ARMA models as a single optimization routine without any statistical knowledge. The computational core of the algorithm is based on a two-step Gauss-Seidel decomposition scheme, where at each iteration the first step updates the autoregressive and moving average parameters by solving a Mixed Integer Optimization problem, and the second step updates the variance parameter in closed form.
  • Model of Optimal Centroids Approach For Multivariate Data Classification
    Pham Van Nha, Le Cam Binh Particle swarm optimization (PSO) is a well-known multidisciplinary optimization algorithm. However, a general mathematical model of PSO has not been presented. In this paper, PSO is presented as a general mathematical model and applied to multivariate data classification. First, the general mathematical model of PSO is analyzed so that it can be applied to complex applications. Then, the Model of Optimal Centroids (MOC) is proposed for multivariate data classification. Experiments were conducted on several data sets to demonstrate the effectiveness of MOC compared with previously proposed algorithms.
  • IGLOO: A stochastic global optimization algorithm to predict the structure of biomolecules adsorbed on metal surfaces
    Juan CORTES, Nathalie TARRAT, Christian Schoen Predicting conformations of molecular systems constitutes a global optimization problem for an appropriate cost (energy) function, where not only the global minimum but also a representative set of local minima is expected as output. Depending on the system size and the complexity of the energy function, solving this optimization problem can be extremely challenging. Our proposed Iterative Global exploration and LOcal Optimization (IGLOO) algorithm iterates RRT-based sampling, local minimization and clustering, until convergence is achieved. We show its performance in a real application.

FD-03: Advances in Operator Splitting Techniques
Stream: Variational inequalities, Nash games, game theory, multilevel and dynamic optimization
Room: Nash
Chair(s): Mathias Staudigl

  • On optimal gradient methods and oracle complexity for smooth (possibly strongly) convex minimization
    Adrien Taylor In this talk, we want to present the (definitely) optimal (black-box) gradient method for smooth strongly convex minimization, which we called the Information-Theoretic Exact Method (ITEM). This method was obtained through semidefinite programming, and we humbly believe it provides some nice insights on the topic of accelerated methods. On the way, we will discuss the corresponding (matching) lower complexity bounds and a constructive procedure for obtaining them, as well as a few links between accelerated and conjugate gradient methods. This talk is based on joint works with Yoel Drori.
  • The sample complexity of level set estimation
    François Bachoc, Tommaso Cesari, Sébastien Gerchinovitz The problem of approximating the level set of a real function f of many real variables arises naturally in many fields, including physics, engineering, and computer science. We investigate the minimum number of evaluations of f needed to output a guaranteed approximation of such a level set. We study the general case of Lipschitz functions f as well as other, smaller function spaces. In all the cases we consider, we show that this sample complexity is exponential in the number of variables. Additionally, we provide algorithms and intuitions on how to attain optimal sample complexity rates.
  • Multi-block Bregman proximal alternating linearized minimization for structured nonsmooth nonconvex problems
    Masoud Ahookhosh, Le Thi Khanh Hien, Nicolas Gillis, Panagiotis Patrinos We introduce BPALM and A-BPALM, two multi-block proximal alternating linearized minimization algorithms using Bregman distances for the sum of a multi-block relatively smooth function and block-separable (nonsmooth) nonconvex functions. Our algorithms are globally convergent to critical points of the objective function under the KL assumption, and their rate of convergence is studied for Lojasiewicz-type KL functions. We apply this framework to orthogonal nonnegative matrix factorization (ONMF), which satisfies all of our assumptions, and the related subproblems are solved in closed form.

FD-05: Optimal Control and Optimization in Economics, Finance and Management III
Stream: Optimal Control and Optimization in Economics, Finance and Management
Room: Pontryagin
Chair(s): Diogo Pinheiro

  • Stochastic Maximum Principle with Regimes and Memory
    Emel Savku, Gerhard-Wilhelm Weber We study a stochastic optimal control problem for a delayed Markov regime-switching jump-diffusion model. We establish necessary and sufficient maximum principles for such a system. We prove an existence and uniqueness theorem for the adjoint equations, which are represented by an anticipated backward stochastic differential equation with jumps and regimes. We illustrate our results with an optimal consumption problem from a cash flow with delay and regimes.
  • Portfolio Optimization with Drift Uncertainty
    Kerem Ugurlu We study the utility maximization problem of a portfolio of one risky asset, a stock, and one riskless asset, a bond, under Knightian uncertainty on the drift term representing the long term growth rate of the risky asset. We further assume that the investor has a prior estimate about the drift term, so that we incorporate into the model a penalty term for deviating from the prior about the mean. We provide explicit solutions, when the investor has logarithmic, power and exponential utility functions.
  • Two-player zero-sum stochastic differential games with Markov-switching jump-diffusion dynamics
    Diogo Pinheiro, Miguel Ferreira, Susana Pinheiro We consider a two-player zero-sum stochastic differential game with Markov-switching jump-diffusion state variable dynamics. We study this game using a combination of dynamic programming and viscosity solution techniques. Under some mild assumptions, we prove that the value of the game exists and is the unique viscosity solution of a certain nonlinear partial integro-differential equation of Hamilton-Jacobi-Bellman-Isaacs type.

FD-06: Applications of Optimization IV
Stream: Applications of Optimization
Room: Moreau
Chair(s): Gabriella Colajanni

  • Volumetric Uncertainty Bounds and Optimal Configurations for Converging Beam Triple LIDAR
    Anthony Brooms, Theodore Holtom We consider the problem of forward uncertainty propagation for the converging beam triple LIDAR technology, used for measuring wind velocity passing through a fixed point in space. The size of the volumetric output uncertainty is related to the inverse of the volume of a parallelepiped with unit edge lengths, delineated by the Doppler LIDAR configuration. Optimal configurations for minimizing output uncertainty are discussed, whilst a grid search procedure for optimizing the value of the parallelepiped volume, subject to LIDAR orientation uncertainty constraints, is presented. (The triple-product volume computation is sketched after this session.)
  • Optimal Incentive in Electric Vehicle Adoption
    Giorgio Rizzini We study a bilevel model with a policymaker and a population of vehicle owners. The policymaker minimizes a cost function deciding the incentive to encourage the largest possible percentage of the fossil-fueled vehicle owners to buy an electric one. All players care about PM10 concentration. The policymaker imposes a traffic ban if the PM10 concentration exceeds a safety threshold. Traffic bans generate a cost to the owners of a fossil-fueled vehicle. We reduce the initial bilevel formulation to a single level problem, which is solved analytically.
  • An Optimization Model to manage UAVs in multitiered 5G network slices
    Daniele Sciacca, Gabriella Colajanni In this paper, we present a three-tier supply chain network model consisting of a fleet of UAVs organized as a FANET and managed by a fleet of UAV controllers, whose purpose is to provide 5G network slices on demand to users and devices on the ground. Our aim is to provide a system optimization perspective for the entire supply chain network, obtaining a constrained optimization problem for which we derive the associated variational inequality formulation. Qualitative properties in terms of existence and uniqueness of the solution are also provided.
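
The parallelepiped volume driving the uncertainty bound in the first talk of this session is the absolute scalar triple product of the three beam directions; a minimal sketch with made-up unit directions (the talk's actual configurations and constraints are not reproduced):

```python
import numpy as np

def parallelepiped_volume(u, v, w):
    """Volume of the parallelepiped spanned by u, v, w:
    the absolute scalar triple product |u . (v x w)| = |det [u v w]|."""
    return abs(np.dot(u, np.cross(v, w)))

# Hypothetical unit beam directions of a triple-LIDAR configuration;
# a volume near 0 (near-coplanar beams) means large output uncertainty.
u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
w = np.array([0.3, 0.3, 0.9])
print(parallelepiped_volume(u, v, w))   # 0.9
```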

Friday, 16:45 – 17:15

FE-01: Closing
Stream: Closing
Room: Fermat
