Plenary Speakers
 

Ulrike Reisach - Artificial Intelligence – How Ethics and Governance Could Contain the Manipulation of Societies

Artificial Intelligence (AI) in this plenary comprises machine learning systems for tracking, profiling, forecasting and making recommendations. Due to their ability to learn and evolve dynamically, they are much faster and more powerful than previous digital tools. This can facilitate both intended and unintended outcomes. It particularly applies to search engines and social media, where a broad public is addressed and where bots can multiply the effects. For example, the Brexit referendum and the presidential elections in the USA and Brazil have started a discussion about whether and how machine learning techniques could influence citizens’ decisions.

The potential impact of machine learning algorithms on the choices and opinions of users of search engines and online media, within and across borders, is discussed in this plenary. Endogenous and exogenous factors that have the potential to influence the process, quality and outcomes of decision making are explained. Examples will show how organizations and their designers, architects and developers can ensure compliance with ethical and social norms, and create trust in their systems among users and other stakeholders who might be directly or indirectly affected. Ethics codes and governance approaches will be discussed, as well as their perception and effectiveness in cyber systems, within the political frameworks of the EU, the USA and China.

The plenary concludes by exhorting the operational research community to make the public more aware of the power of Artificial Intelligence, and how understanding it can help societies protect themselves from societal and political manipulation.


William Cook - The Traveling Salesman Problem: Postcards from the Edge of Impossibility

IFORS Distinguished Lecturer

Given a collection of points, the TSP asks for the shortest route to visit them all. Simple enough. But even a whisper of the problem strikes fear in the heart of the computing world. Last year, a Washington Post article reported it would take "1,000 years to compute the most efficient route between 22 cities." This claim, however, ignores 70 years of intense study in the OR community. A 22-city TSP can be handled in a snap with modern algorithms, even on an iPhone. Going larger, we describe techniques that have been used to solve to precise optimality examples with nearly 50,000 points and Google Map walking distances. And if you need to visit the nearest 2,079,471 stars, there is a route, ready to go, that is guaranteed to be no longer than 1.00002 times the length of a shortest possible tour.
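
To see why a 22-city instance poses no difficulty, here is a minimal sketch (illustrative only, not the branch-and-cut codes behind the large-scale records above) of the Held-Karp dynamic program, which solves an n-city instance exactly in O(n^2 2^n) time; the random instance below is made up for the example.

# Held-Karp dynamic programming for the TSP: exact, O(n^2 * 2^n) time.
# Illustrative sketch only; production codes rely on cutting planes and
# branch-and-bound to reach tens of thousands of cities.
import itertools, math, random

def held_karp(d):
    """Length of an optimal tour for a symmetric distance matrix d (n x n)."""
    n = len(d)
    # best[(S, j)]: shortest path starting at city 0, visiting exactly the set S, ending at j
    best = {(frozenset([j]), j): d[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in itertools.combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                best[(S, j)] = min(best[(S - {j}, k)] + d[k][j]
                                   for k in subset if k != j)
    full = frozenset(range(1, n))
    return min(best[(full, j)] + d[j][0] for j in range(1, n))

# Usage: 13 random points in the unit square.
random.seed(0)
pts = [(random.random(), random.random()) for _ in range(13)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
print(round(held_karp(dist), 4))

Even this pure-Python sketch handles a dozen cities in well under a second; compiled implementations and, above all, the cutting-plane machinery developed over those 70 years are what push exact solutions to the tens of thousands of points mentioned above.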

 

This work follows a long line of computational research going back to Julia Robinson in 1949 and Dantzig, Fulkerson, and Johnson in 1954. The general setting is the following. Complexity theory suggests there are limits to the power of general-purpose computational techniques, in engineering, science and elsewhere. But what are these limits and how widely do they constrain our quest for knowledge? The TSP can play a crucial role in this context, demonstrating whether or not focused efforts on a single, possibly unsolvable, model will produce results beyond our expectations.

 

We will discuss the history of the TSP and its applications, together with recent computational efforts towards exact and approximate solution methods. The talk is based on joint work with David Applegate, Daniel Espinoza, Marcos Goycoolea, and Keld Helsgaun.

Dick den Hertog - What every OR practitioner should know about Robust Optimization

In this presentation we explain the core ideas in robust optimization and show how to successfully apply them in practice. Real-life optimization problems often contain parameters that are uncertain, due to, e.g., estimation or implementation errors. The idea of robust optimization is to find a solution that is immune against these uncertainties. Over the last two decades, efficient methods have been developed to find such robust solutions. The underlying idea is to formulate an uncertainty region for the uncertain parameters against which one would like to safeguard the solution. In the robust paradigm it is then required that the constraints hold for all parameter values in this uncertainty region. It can be shown that, e.g., for linear programming with the most important choices of the uncertainty region, the final problem can be reformulated as a linear or conic quadratic optimization problem, for which very efficient solvers are available nowadays. Robust optimization is valuable in practice, since it can solve large-scale uncertain problems and requires only crude information on the uncertain parameters. Some state-of-the-art modeling packages have already incorporated robust optimization technology.
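
As a minimal illustration of this reformulation idea (a textbook sketch, not the specific material of the tutorial), consider a single uncertain linear constraint whose coefficient vector varies in an uncertainty set Z:
\[
(\bar{a} + P\zeta)^{\top} x \le b \qquad \text{for all } \zeta \in Z .
\]
For the box set \(Z = \{\zeta : \|\zeta\|_{\infty} \le 1\}\) the robust counterpart is
\[
\bar{a}^{\top} x + \|P^{\top} x\|_{1} \le b ,
\]
which reduces to linear constraints after a standard reformulation of the 1-norm, while for the ellipsoidal set \(Z = \{\zeta : \|\zeta\|_{2} \le \rho\}\) it becomes the conic quadratic constraint
\[
\bar{a}^{\top} x + \rho\, \|P^{\top} x\|_{2} \le b .
\]
Both counterparts are finite and convex, which is why the robust problem remains tractable for standard solvers.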

In this tutorial we restrict ourselves to linear optimization. We will treat the basics of robust linear optimization, and also show the huge value of robust optimization in (dynamic) multistage problems. Robust optimization has already shown its high practical value in many fields: logistics, engineering, finance, medicine, etc. In this tutorial we will discuss some of these applications. 

 
Tutorial Speakers

Anna Nagurney - Game Theory and Variational Inequalities: From Transportation and Supply Chains to Financial Networks and the Internet

In this tutorial I will present the fundamentals of the theory of variational inequalities with a focus on game theory and numerous network-based topical applications, beginning with congested urban transportation networks, which have served as the foundation for many methodological advances. Multitiered supply chain networks, with applications ranging from blood supply chains in healthcare to fresh produce and disaster relief supply chains, will also be highlighted. I will then proceed to discuss financial networks, draw analogies to other network systems, including the Internet, and demonstrate how they are also the subject of numerous cyberattacks. Recent results from game theory and variational inequalities will be covered in the setting of cybersecurity investments and vulnerability analysis to contrast and compare noncooperation versus cooperation in major industries, notably, in retail and in financial services.

 

The audience will learn about variational inequality theory as a powerful tool for the modeling, analysis, and computation of solutions to a breadth and depth of applications in which there are multiple interacting decision-makers.
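
For reference, in standard notation (not specific to the talk), the finite-dimensional variational inequality problem VI(F, K) is: given a closed convex set \(K \subseteq \mathbb{R}^{n}\) and a mapping \(F : K \to \mathbb{R}^{n}\), find \(x^{*} \in K\) such that
\[
\langle F(x^{*}),\, x - x^{*} \rangle \;\ge\; 0 \qquad \text{for all } x \in K .
\]
Nash equilibria of games with concave payoffs and convex strategy sets, as well as Wardrop equilibria of congested networks, can be characterized as solutions of such a problem for suitable choices of F and K, which is what makes the formalism so broadly applicable.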


Bernd Bischl - Bayesian Optimization and Automatic Machine Learning

The talk will cover the main components of sequential model-based optimization algorithms, also known as Bayesian optimization. Algorithms of this kind represent the state of the art for expensive black-box optimization problems and sequential planning of experiments, and are getting increasingly popular for hyper-parameter optimization and automatic machine learning (AutoML). After covering the fundamentals, I will outline some extensions with respect to parallel point acquisition and multi-objective optimization. The talk will conclude with a discussion of some open challenges.
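
As a concrete illustration of the basic loop (a minimal sketch assuming NumPy, SciPy and scikit-learn are available; the objective f and the search interval are placeholders, and production AutoML systems add parallel point acquisition, racing and many other refinements):

# Sequential model-based (Bayesian) optimization of a 1-D black-box function:
# fit a Gaussian-process surrogate, maximize expected improvement, evaluate, repeat.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):                                      # stand-in for an expensive black box
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(4, 1))            # small initial design
y = f(X).ravel()

for _ in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = rng.uniform(-3, 3, size=(500, 1))   # random candidate points
    mu, sigma = gp.predict(cand, return_std=True)
    imp = y.min() - mu                         # expected improvement (minimization)
    z = imp / np.maximum(sigma, 1e-12)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]               # acquisition step
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next))

print("best x:", X[y.argmin()].round(3), "best f(x):", round(y.min(), 3))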

 

Claudia D'Ambrosio - Challenging Optimization Problems in Energy Production and Transportation

Management of energy systems is one of the biggest challenges of our time. The daily demand for energy increases constantly for many reasons. Moreover, the wide use of renewable energies, aimed at limiting polluting emissions, can create instability in the networks and uncertainty in energy production. In this context, Operational Research provides crucial tools for optimizing the strategic, tactical, and operational decisions to be taken. After an overview of the most challenging problems in energy systems, we will focus on two of them: the hydro unit commitment problem and the placement of phasor measurement units in smart grids.

 

The former is a production problem that is solved daily by utility companies. By its nature, it can be modeled as a mixed integer nonlinear program. Several objectives can be considered. We discuss different formulations and algorithmic solutions to tackle them, from the classical ones based on linearization techniques to more recent graph formulations. The latter is a strategic problem aimed at making the grid observable, thus smarter. We introduce several equivalent formulations and discuss their advantages, drawbacks, and some tailored algorithms.
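
To fix ideas, a deliberately stylized single-reservoir, single-unit version of the hydro unit commitment problem (illustrative only; the talk treats richer formulations) can be written as
\[
\max_{u, q, v, p} \; \sum_{t=1}^{T} \lambda_{t}\, p_{t}
\quad \text{s.t.} \quad
p_{t} = \varphi(q_{t}, v_{t}), \qquad
v_{t+1} = v_{t} + I_{t} - q_{t},
\]
\[
u_{t}\, \underline{q} \le q_{t} \le u_{t}\, \overline{q}, \qquad
\underline{v} \le v_{t} \le \overline{v}, \qquad
u_{t} \in \{0, 1\}, \qquad t = 1, \dots, T,
\]
where \(u_t\) is the on/off status of the turbine, \(q_t\) the turbined flow, \(v_t\) the reservoir volume, \(I_t\) the inflow, \(\lambda_t\) the electricity price and \(\varphi\) the nonlinear power function. The binary variables together with the nonlinear \(\varphi\) make the problem a mixed integer nonlinear program; replacing \(\varphi\) by a piecewise-linear approximation yields the classical linearization-based approaches mentioned above.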

 

Holger Hoos - Programming by Optimisation: Automated algorithm configuration, selection and beyond

In recent years, there has been a significant increase in the use of automated algorithm design methods, such as automated algorithm configuration and portfolio-based algorithm selection, across many areas within operations research, artificial intelligence and beyond. These methods are based on cutting-edge machine learning and optimisation techniques; they have also led to substantial advances in those areas.

 

In this tutorial, I will give an overview of these automated algorithm design methods and introduce Programming by Optimisation (PbO), a principled approach for developing high-performance software based on them. I will explain how PbO can fundamentally change the nature of developing solvers for challenging computational problems and give examples of its successful application to a range of prominent problems from OR and AI - notably, mixed integer programming, the travelling salesman problem, AI planning, automated reasoning and machine learning.
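
To make the idea of automated algorithm configuration concrete, here is a minimal sketch based on plain random search; the parameter space, run_solver and the toy solver below are hypothetical placeholders, and real configurators such as ParamILS or SMAC replace the random sampling with model-based search, racing and run capping.

# Random-search algorithm configuration: sample parameter settings, evaluate each
# on a set of training instances, keep the best-performing configuration.
import random
import statistics

PARAM_SPACE = {                                 # hypothetical solver parameters
    "restart_strategy": ["luby", "geometric", "none"],
    "var_decay": (0.80, 0.99),                  # continuous range
    "preprocessing": [True, False],
}

def sample_config(rng):
    return {
        "restart_strategy": rng.choice(PARAM_SPACE["restart_strategy"]),
        "var_decay": rng.uniform(*PARAM_SPACE["var_decay"]),
        "preprocessing": rng.choice(PARAM_SPACE["preprocessing"]),
    }

def configure(run_solver, instances, budget=100, seed=0):
    """Return the sampled configuration with the best mean runtime on the instances."""
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(budget):
        cfg = sample_config(rng)
        cost = statistics.mean(run_solver(cfg, inst) for inst in instances)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

# Usage with a toy stand-in for a real solver's measured runtime (seconds):
def toy_solver(cfg, inst):
    return inst * (1.5 - cfg["var_decay"]) + (0.0 if cfg["preprocessing"] else 0.2)

print(configure(toy_solver, instances=[1, 2, 4, 8]))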
 

Ivana Ljubic - From Game Theory to Graph Theory: A Bilevel Journey

In bilevel optimization there are two decision makers, commonly denoted as the leader and the follower, and decisions are made in a hierarchical fashion: the leader makes the first move, and then the follower reacts optimally to the leader’s action. It is assumed that the leader can anticipate the decisions of the follower, hence an optimal solution for the leader can be obtained by solving a nested optimization problem that takes into account the follower’s best response.
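
In symbols, an optimistic mixed-integer bilevel linear program (a generic textbook form, used here only to fix notation) reads
\[
\min_{x,\,y} \; c^{\top} x + d^{\top} y
\quad \text{s.t.} \quad
A x \le a, \quad x \in X,
\]
\[
y \in \arg\min_{y'} \bigl\{\, f^{\top} y' \;:\; B x + D y' \le b,\; y' \in Y \,\bigr\},
\]
where the sets X and Y may impose integrality on some of the variables; the leader chooses x anticipating that the follower will respond with an optimal y.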

 

In this talk we focus on new branch-and-cut (B&C) algorithms for dealing with mixed-integer bilevel linear programs (MIBLPs). We first address a general case in which intersection cuts are used to cut off infeasible solutions. We then focus on a subfamily of MIBLPs in which the leader and the follower share a set of items, and the leader can select some of the items to inhibit their usage by the follower. Interdiction problems, blocker problems, and most-vital node/edge detection problems are some examples of optimization problems that satisfy the latter condition. We discuss the implementation of a generic, efficient B&C scheme that relies on the separation of “interdiction cuts”.

 

These new B&C algorithms consistently outperform (often by a large margin) alternative state-of-the-art methods from the literature, including methods that exploit problem specific information for special instance classes.

Leon Petrosyan - Time-Consistency Problem in Control Theory and Dynamic Games

The solution of a dynamic optimization problem is time-consistent if, when the solution policy is applied to a subproblem with a later starting time and a state brought about by prior optimal behavior, it remains optimal in that subproblem. In classical optimization problems solvable by the dynamic programming technique, the optimal solution is time-consistent. In multicriteria control problems, Pareto-optimal solutions are also time-consistent, but the selection procedures for choosing one particular solution from the set of all Pareto-optimal solutions are in general time-inconsistent (for example, the NB-solution). In differential and dynamic games the Nash equilibrium is a time-consistent solution, but practically all solution concepts taken from classical one-shot cooperative game theory are time-inconsistent (the core, the Shapley value, the nucleolus). We present examples of time-consistent and time-inconsistent solution concepts in multicriteria control and dynamic games and propose refinement procedures to make solutions time-consistent.
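
The dynamic programming case can be stated compactly in standard optimal-control notation (g the running payoff, q the terminal payoff, h the dynamics; symbols introduced here only for illustration). Consider
\[
V(t_{0}, x_{0}) \;=\; \max_{u(\cdot)} \; \int_{t_{0}}^{T} g\bigl(s, x(s), u(s)\bigr)\, ds \;+\; q\bigl(x(T)\bigr),
\qquad \dot{x} = h(x, u), \quad x(t_{0}) = x_{0} .
\]
Bellman's principle of optimality states that the restriction of an optimal control \(u^{*}(\cdot)\) to any interval \([t, T]\) remains optimal for the subproblem starting from \((t, x^{*}(t))\) on the optimal trajectory; this is exactly time consistency. For cooperative solutions such as the core or the Shapley value, the analogous property generally fails when the solution is recomputed in the subgame starting at \((t, x^{*}(t))\).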

Nico Vandaele - Humanitarian Operations Modeling at the Interplay of Relief and Development

In this tutorial we explore a personal view on how OR modeling can be helpful in supporting humanitarian and development decision making. We rely on the application field of immunization to illustrate our findings. Immunization systems have to cope both with prevention (as part of development) and with outbreaks (as part of disaster relief).

 

The starting point is a user-centered view; in this case, a patient/child/population needs to be immunized. This led us to a five-step approach encapsulating the modeling of the problem setting. It is composed of (1) stakeholder analysis and system definition, (2) derivation of key performance indicators, (3) modelling and scenario generation, (4) scenario ranking and (5) implementation. This baseline allows complementary OR techniques and modeling approaches to be deployed. Recent literature reviews reveal multiple opportunities for research.

 

Relevance, resilience, sustainability and technological evolutions compel us to opt for an interdisciplinary approach. Many of the insights from the immunization field are useful for other fields of decision support.

 

Nicole Adler - Applying game theory to analyze air transport markets

Applying game theory to analyse markets has proven to be a useful methodology to shed light on the aviation supply chain: from airlines, enabling analyses of hub-and-spoke network design, codesharing and alliance formation, to airports, enabling capacity decisions and passenger flows within a terminal, to air traffic control provision, enabling collaborative decision-making and the implementation of new technologies. I will discuss the underlying threads connecting many of the models in the field and then concentrate on a recent Horizon 2020 project called COMPAIR. Within this project, we develop a two-stage network congestion game whereby regulators, air traffic control providers and airlines interact. The two-stage game utilizes profit maximization and cost minimization models combined with discrete choice functions in order to estimate the best response strategies of all actors in the market. We debate whether it is possible to introduce competition for the market in air traffic control in Europe and the likely outcomes. The game considers an auctioning process in which the regulator sets rules a priori and the service providers bid for the right to serve the market. We test the likely equilibrium outcomes if the companies are for-profit or non-profit, and the results suggest that introducing competition for the market via outsourcing service provision may reduce charges by up to half the current levels. It would also appear that auctioning the service is likely to defragment the European market as companies win more than one auction. Finally, for-profit companies are highly likely to invest in new technologies, thus encouraging adoption faster than appears to be occurring today and helping to accomplish the goals of the Single European Sky initiative.


Piotr Faliszewski - Committee Elections: Applications, Axioms, and Algorithms

In this talk I will give a short introduction to multiwinner elections. The idea of a multiwinner election is that a group of agents form opinions on a group of available candidates and, based on these opinions (referred to as preferences), select a committee of a given size. While multiwinner elections are often considered in the context of political elections (e.g., in terms of choosing parliaments or university senates), they form a very convenient formalism for modelling a number of other activities, such as selecting finalists of competitions, selecting products to offer on an Internet store's website, or selecting movies to offer on transatlantic flights. In this talk we will present the formalism of multiwinner elections, several voting rules, their axiomatic properties, connections between these properties and particular applications, and algorithms for computing election winners. This last issue is particularly important, as computing the results of multiwinner elections is often NP-hard, but there are very efficient workarounds that drastically lower the complexity.
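
As one concrete example of such a workaround (a minimal sketch, not material from the talk; the votes below are made up for illustration): the Chamberlin-Courant rule with Borda scores is NP-hard to compute exactly, but a simple greedy algorithm is known to give a (1 - 1/e)-approximation of the optimal total voter satisfaction.

# Greedy approximation of the Chamberlin-Courant multiwinner rule with Borda scores:
# a voter's satisfaction is the Borda score of their favourite committee member.
def borda(position, m):
    """Borda score a voter assigns to the candidate ranked at this position (0 = top)."""
    return m - 1 - position

def greedy_chamberlin_courant(profile, k):
    """profile: list of rankings (lists of candidate ids, best first); k: committee size."""
    m = len(profile[0])
    committee = set()
    sat = [0] * len(profile)                     # current satisfaction of each voter
    for _ in range(k):
        best_c, best_gain, best_sat = None, -1, None
        for c in range(m):
            if c in committee:
                continue
            new_sat = [max(sat[i], borda(profile[i].index(c), m))
                       for i in range(len(profile))]
            gain = sum(new_sat) - sum(sat)
            if gain > best_gain:
                best_c, best_gain, best_sat = c, gain, new_sat
        committee.add(best_c)
        sat = best_sat
    return committee

# Usage: four voters ranking candidates 0..3, committee of size 2.
votes = [[0, 1, 2, 3], [0, 1, 2, 3], [3, 2, 1, 0], [2, 3, 0, 1]]
print(greedy_chamberlin_courant(votes, 2))       # prints {0, 2}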


Roberto Cominetti - Routing games in congested networks: convergence for large games and the Price-of-Anarchy

We revisit the concept of Wardrop equilibria for routing games, as a continuous approximation for finite games with an increasing number of small players. We consider two scenarios: a deterministic model in which players have a small weight, and also a stochastic scenario in which players have non-negligible weight but are present with a small probability. In both cases we provide a formal proof of convergence towards two different Wardrop models, where in the second case the flows in the network retain their stochastic nature and are described by Poisson distributions. We also discuss some recent results regarding the behavior of the Price-of-Anarchy under these different regimes, both at intermediate levels of congestion as well as in the limit when the network becomes heavily congested.
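
In standard notation (given here only as background), a route-flow vector h is a Wardrop equilibrium if, for every origin-destination pair w with demand d_w, route set R_w and route costs C_r(h),
\[
\sum_{r \in R_{w}} h_{r} = d_{w}, \qquad h_{r} \ge 0, \qquad
h_{r} > 0 \;\Longrightarrow\; C_{r}(h) = \min_{r' \in R_{w}} C_{r'}(h),
\]
i.e., all routes actually used for a given origin-destination pair have equal and minimal cost. The convergence results above describe how equilibria of finite-player games approach flows satisfying these conditions, or their stochastic Poisson-flow counterpart, as individual players become negligible.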

Sally Brailsford - Introduction to hybrid simulation

This tutorial provides a basic introduction to hybrid (or multi-paradigm) simulation, a modelling approach that combines two or more of the following simulation methods: discrete-event simulation, system dynamics, and agent-based simulation. Although simulation models containing both discrete and continuous variables have been developed since the 1960s, genuine hybrid simulation - as the term is understood today - only really came into existence in the 1990s. Initially it was mainly used within the disciplines of computer science and engineering, but in recent years there has been a rapid growth in interest among the operational research (OR) community. This is due in part to the wider availability and greater user-friendliness of commercial software for developing hybrid models, but the main reason for its increasing popularity among OR modellers is its ability to capture multiple aspects of complex real-world problems by exploiting the complementary strengths of the different methods. This talk provides a practical overview of the topic illustrated by some examples from the field of healthcare, presents some findings from a recent review of the literature from an OR perspective, and identifies promising areas for future research.

Sheetal Silal - Evidence-based tools to support decision making for malaria elimination

With increases in computing power and renewed efforts in the global fight against diseases like malaria, dengue and TB, mathematical modelling is fast becoming a tool to guide public health policy. This talk will focus on the application of mathematical modelling and computer simulation to predict the dynamics of malaria and to evaluate the potential impact of policy in reducing morbidity and mortality. Particular attention will be paid to the development of tools to communicate modelling results and support decision-making. Two case studies will be presented: (1) Malaria Elimination and Costing in the Asia-Pacific, a study of malaria dynamics in the 22 countries of the Asia-Pacific, and (2) Malaria Strategy Design, supporting the development of a focal malaria elimination strategy. These tools aim to make mathematical models more accessible to policy-makers and serve to bring quantitative tools into the decision-making process.
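
As a point of reference (the classical Ross-Macdonald formulation, a textbook starting point rather than the specific models behind these case studies), malaria transmission between humans and mosquitoes can be described by
\[
\frac{dx}{dt} = m\, a\, b\, z\, (1 - x) - r\, x,
\qquad
\frac{dz}{dt} = a\, c\, x\, (1 - z) - g\, z,
\]
where x and z are the proportions of infected humans and mosquitoes, m is the number of mosquitoes per human, a the biting rate, b and c the transmission probabilities per bite (mosquito to human and human to mosquito), r the human recovery rate and g the mosquito mortality rate. The associated basic reproduction number is \(R_{0} = m a^{2} b c / (r g)\); models used for policy support typically extend this skeleton with additional compartments, interventions and costs.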