Satellite workshop 2: Learned methods for operations research

This workshop is part of the broader CWI research semester programme on Learning Enhanced Optimization, contributing to its overarching mission of advancing cutting-edge research in theoretical computer science, operations research and beyond. It is also the final workshop of the OPTIMAL project "Optimization for and with Machine Learning".

When
3 Nov 2025, 9:30 a.m. – 6 Nov 2025, 12:30 p.m. CET (GMT+1)
Where
Turing Hall, CWI, Science Park 125, Amsterdam, Netherlands.

Background

Optimization problems appear everywhere in society. Think for example of vehicle routing, chip manufacturing, energy networks, pricing and drug design. Similar problems also appear in other scientific disciplines, for example biology and physics. The field of Operations Research aims to develop the mathematics and algorithms needed to solve such optimization problems efficiently in practice. Traditionally, methods were developed purely from mathematical models and analyzed on either worst-case scenarios or specific practical data sets. Recent research focuses on using large amounts of data in the development of optimization algorithms, with the goal of methods that work well (in terms of accuracy as well as efficiency) for all or almost all data sets that may arise in practice. Achieving this requires using machine learning within traditional optimization methods, which in turn raises new challenges such as fairness, explainability, robustness and privacy.

About the workshop

This four-day workshop will bring together leading researchers in the field to discuss recent advancements, explore key challenges, and foster new collaborations.

The workshop aims to strengthen and widen research on all types of optimization, combined with any form of machine learning, both on a fundamental level as well as on direct applications, with an emphasis on the responsible use of machine learning. The main focus is on the use of machine learning within the algorithm design.

The programme will feature keynote lectures, contributed and lightning talks, and provide ample time for research discussions.

.....................................................................................................................

Keynote Speakers – Biographies & Lecture Information

Andrea Lodi is the Andrew H. and Ann R. Tisch Professor at the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion. He is a member of the Operations Research and Information Engineering field at Cornell University.

He received his Ph.D. in systems engineering from the University of Bologna in 2000 and was a Herman Goldstine Fellow at the IBM Mathematical Sciences Department, N.Y., in 2005–2006. He was a full professor of operations research at DEI, University of Bologna, between 2007 and 2015. From 2015 until joining Cornell, he held the Canada Excellence Research Chair in “Data Science for Real-time Decision Making” at Polytechnique Montréal.

Lecture: The differentiable Feasibility Pump
Although nearly 20 years have passed since its conception, the feasibility pump algorithm remains a widely used heuristic for finding feasible primal solutions to mixed-integer linear problems. Many extensions of the initial algorithm have been proposed, yet its core remains centered around two key steps: solving the linear relaxation of the original problem to obtain a solution that respects the constraints, and rounding it to obtain an integer solution. This talk shows that the traditional feasibility pump and many of its follow-ups can be seen as gradient-descent algorithms with specific parameters. A central aspect of this reinterpretation is the observation that the traditional algorithm differentiates the solution of the linear relaxation with respect to its cost. This reinterpretation opens many opportunities for improving the performance of the original algorithm. We study how to modify the gradient-update step as well as how to extend the loss function. We perform extensive experiments on MIPLIB instances and show that these modifications can substantially reduce the number of iterations needed to find a solution. (Joint work with M. Cacciola, Y. Emine, A. Forel, A. Frangioni.)
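
To make the two core steps concrete, here is a minimal teaching sketch of the classic feasibility pump for pure binary programs, using `scipy.optimize.linprog` for the LP relaxations. All function and variable names are illustrative; this is not the authors' implementation, which additionally uses perturbation and the gradient-based extensions described above.

```python
import numpy as np
from scipy.optimize import linprog

def feasibility_pump(c, A_ub, b_ub, max_iter=50):
    """Toy sketch of the classic feasibility pump for a pure binary
    program: min c@x s.t. A_ub@x <= b_ub, x in {0,1}^n.
    Real implementations add random perturbations to escape cycling."""
    n = len(c)
    bounds = [(0, 1)] * n
    # Step 1: solve the LP relaxation of the original problem.
    x_lp = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x
    for _ in range(max_iter):
        # Step 2: round the LP point to the nearest integer point.
        x_int = np.round(x_lp)
        if np.all(A_ub @ x_int <= b_ub + 1e-9):
            return x_int  # feasible integer solution found
        # Step 3: project back onto the LP relaxation, minimizing the
        # L1 distance to x_int (linear for binary variables, since
        # |x_j - 0| = x_j and |x_j - 1| = 1 - x_j).
        dist_c = np.where(x_int > 0.5, -1.0, 1.0)
        x_lp = linprog(dist_c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x
    return None  # stalled without finding a feasible point
```

The gradient-descent reinterpretation in the talk views exactly this round-and-project loop as gradient updates on a suitable loss.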

Bart van Parys is a senior researcher in the Stochastics group at Centrum Wiskunde & Informatica (CWI), the Dutch national research institute for mathematics and computer science in Amsterdam. His research lies at the interface between optimization and machine learning: he develops novel mathematical methodologies and algorithms with which data can be turned into better decisions. Although most of his research is methodological, he also enjoys applications related to problems in renewable energy.

Lecture: Optimal Decisions with Messy Data and Uncertainty
Problem uncertainty typically limits how well decisions can be tailored to the problem at hand, but often cannot be avoided. The availability of large quantities of data in modern applications, however, presents an exciting opportunity to nevertheless make better-informed decisions. Capitalizing on this opportunity requires developing novel tools at the intersection of operations research, stochastics and data science. In a modern setting, the primitive describing uncertainty is often messy data rather than classical distributions. Simply quantifying the probability of an undesirable outcome becomes a challenging uncertainty quantification problem, which I approach with a distributional optimization lens. Distributionally robust optimization (DRO) has recently gained prominence as a paradigm for making data-driven decisions that are protected against adverse overfitting effects. We justify the effectiveness of this paradigm by pointing out that certain DRO formulations indeed enjoy optimal statistical properties. Furthermore, DRO formulations can also be tailored to efficiently protect decisions against overfitting even when working with messy, corrupted data. Finally, as such formulations are often computationally tractable, they provide a practical road to the development of tomorrow's trustworthy decision systems.

Çağıl Koçyiğit is an Associate Professor at the Luxembourg Centre for Logistics and Supply Chain Management, University of Luxembourg. Her research focuses on optimization under uncertainty with applications in economics, policy and mechanism design. Prior to joining the University of Luxembourg, she received a Ph.D. in Management of Technology from École Polytechnique Fédérale de Lausanne (EPFL) in 2020, where she worked with Daniel Kuhn. She holds a B.Sc. degree and an M.Sc. degree in Industrial Engineering from Bilkent University.

Lecture: Integrating Machine Learning and Optimization for Efficient, Fair, and Interpretable Allocation of Scarce Resources
This talk focuses on the problem of homelessness and presents research that integrates machine learning and optimization to address a key question: how can scarce housing resources be allocated to mitigate homelessness while promoting fairness and transparency? Leveraging administrative data collected in deployment, we design policies that are efficient, fair, and interpretable, and establish theoretical guarantees of optimality. When evaluated on real data, our policies improve rates of exit from homelessness by 5.16%, and policies that are fair in either allocation or outcomes by race come at a very low price of fairness. We conclude the talk with a brief discussion of future research directions and other societal domains where data-driven optimization can help improve the lives of vulnerable individuals.

Dimitris Bertsimas is the Vice Provost for Open Learning, the Associate Dean of Online Education and Artificial Intelligence, the Boeing Professor of Operations Research, and the faculty director of the Master of Business Analytics program at MIT, where he has been a faculty member since 1988. His research focuses on optimization, machine learning, and applied probability, with applications in healthcare, finance, operations management, and transportation. He has authored over 350 scientific papers and eight graduate-level textbooks.


Lecture: The R.O.A.D. to precision medicine
We propose a novel framework that addresses the deficiencies of Randomized clinical trial data subgroup analysis while transforming ObservAtional Data to be used as if they were randomized, thus paving the road for precision medicine. Our approach counters the effects of unobserved confounding in observational data through a two-step process that adjusts the predicted outcomes under treatment. In both steps, optimization plays a privileged role.
These adjusted predictions are used to train decision trees, optimizing treatment assignments for patient subgroups based on their characteristics and enabling intuitive treatment recommendations. We applied this framework to observational data from patients with gastrointestinal stromal tumors (GIST). The tree recommendations surpassed the performance of current clinical guidelines when evaluated against an external cohort. Furthermore, we extended the application of this framework to RCT data from patients with extremity sarcomas. Despite initial trial indications of universal treatment necessity, our framework identified a subset of patients who may not require treatment. Once again, we successfully validated our recommendations in an external cohort.
(Joint work with Angelos Koulouras, MIT and Antonis Margonis, MIT)

Francis Bach is a researcher at Inria, where he has led the machine learning team, part of the Computer Science department at Ecole Normale Supérieure, since 2011. He graduated from Ecole Polytechnique in 1997 and completed his Ph.D. in Computer Science at U.C. Berkeley in 2005, working with Professor Michael Jordan. He spent two years in the Mathematical Morphology group at Ecole des Mines de Paris, then joined the computer vision project-team at Inria/Ecole Normale Supérieure/CNRS from 2007 to 2010.

Francis Bach is primarily interested in machine learning, and especially in sparse methods, kernel-based learning, neural networks, and large-scale optimization. He published the book "Learning Theory from First Principles" through MIT Press in 2024.

He obtained a Starting Grant in 2009 and a Consolidator Grant in 2016 from the European Research Council, and received the Inria young researcher prize in 2012, the ICML test-of-time award in 2014 and 2019, the NeurIPS test-of-time award in 2021, the Lagrange prize in continuous optimization in 2018, and the Jean-Jacques Moreau prize in 2019. He was elected to the French Academy of Sciences in 2020. He was program co-chair of the International Conference on Machine Learning (ICML) in 2015, general chair in 2018, and president of its board between 2021 and 2023; he was co-editor-in-chief of the Journal of Machine Learning Research between 2018 and 2023.

Lecture: Denoising diffusion models without diffusions
Denoising diffusion models have enabled remarkable advances in generative modeling across various domains. These methods rely on a two-step process: first, sampling a noisy version of the data—an easier computational task—and then denoising it, either in a single step or through a sequential procedure. Both stages hinge on the same key component: the score function, which is closely tied to the optimal denoiser mapping noisy inputs back to clean data. In this talk, I will introduce an alternative perspective on denoising-based sampling that bypasses the need for continuous-time diffusion processes. This framework not only offers a fresh conceptual angle but also naturally extends to discrete settings, such as binary data. Joint work with Saeed Saremi and Ji-Won Park (https://arxiv.org/abs/2305.19473, https://arxiv.org/abs/2502.00557).

Marleen Balvert is a researcher at Tilburg University specializing in optimization and machine learning. She holds an MSc in Operations Research & Management Science (2011-2012, cum laude) and a BSc in Econometrics & Operations Research (2007-2011, with distinction), both from Tilburg University. Her research includes analyzing large genomics datasets and developing tools for the Zero Hunger Lab to support global efforts toward zero hunger.

Lecture: Uncovering the genetic causes of diseases using machine learning and optimization
Many of the diseases that we cannot cure today, such as cancer, Alzheimer's or amyotrophic lateral sclerosis (ALS), are caused by genetic characteristics. Knowing which genetic traits are causative of disease would improve our understanding of the disease and would steer the development of treatments. Therefore, research groups across the world have collected immense amounts of data, containing the genetic information of thousands, sometimes even over a hundred thousand, individuals. This data comprises millions of genetic features of both patients with the studied disease and healthy control individuals. Analyzing this data holds the promise of uncovering the disease-causing genetic traits. Over the past decades, several disease-causing genetic characteristics have been identified using single-feature statistical analyses, leading to an understanding of the cause of disease for some patients; in several cases this even led to successful treatment development. Nevertheless, for most patients the genetic underpinnings of a disease remain unknown, because often a combination of genetic traits rather than an individual trait leads to disease. We therefore need more complex methods to analyze the data. This talk provides an overview of the work that our group has done on the development of machine learning methods for identifying ALS-causing genetic variants.

Jannis Kurtz is an Assistant Professor in Mathematical Methods and Applications in Data Science at the Faculty of Economics and Business (Section Business Analytics) of the University of Amsterdam. His research covers the development of theory and algorithms for optimization and machine learning models and their applications.

Lecture: Deep Learning Enhanced Robust and Bilevel Optimization
Solving robust and bilevel optimization problems can be computationally very demanding, especially if the decision variables are integer. We show in this talk that trained neural networks can be used to support classical solution methods, leading to significantly faster methods which achieve high-quality solutions. For two-stage robust optimization problems we train hand-crafted neural networks to predict the optimal value of the second-stage problem. These networks can be incorporated into the classical column-and-constraint generation algorithm, leading to significant speed-ups. For bilevel optimization we train neural networks to predict the optimal value of the follower. These networks can be used to derive tractable formulations of the so-called optimal-value function reformulation of the bilevel optimization problem.

Programme Details

MONDAY 3 NOVEMBER

  • 09:30–10:00 Registration & Coffee/Tea
  • 10:00–10:10 Welcome by Ilker Birbil
  • 10:10–11:00 Jannis Kurtz – Deep Learning Enhanced Robust and Bilevel Optimization
  • 11:00–11:30 Coffee/Tea
  • 11:30–12:00 Ward Romeijnders – Decomposition algorithms for stochastic mixed-integer programs
  • 12:00–12:30 Ahmadreza Marandi – Robust Spare Part Inventory Management
  • 12:30–13:30 Group Photo + Lunch
  • 13:30–13:45 Christopher Hojny – Verifying Message-Passing Neural Networks Via Topology-Based Bounds Tightening
  • 13:45–14:00 Francesco Giliberto – Optimal Information Relaxation Bounds for Multi-Stage Stochastic Optimization
  • 14:00–14:15 Daan Otto – Coherent Local Explanations for Mathematical Optimization
  • 14:15–15:00 Coffee/Tea
  • 15:00–16:00 Dimitris Bertsimas (online) – The R.O.A.D. to precision medicine
  • 16:00–17:30 Reception at CWI Forum (Dutch appetizers and drinks)

TUESDAY 4 NOVEMBER

  • 09:50–10:00 Coffee/Tea
  • 10:00–11:00 Andrea Lodi – The differentiable Feasibility Pump
  • 11:00–11:30 Coffee/Tea
  • 11:30–12:00 Joaquim Gromicho – Predicting Willingness to Vaccinate While Optimizing Mobile Clinic Routes in Kenya
  • 12:00–12:15 Steven Kelk – AI and proofs – things are moving quickly
  • 12:15–12:30 Yukihiro Murakami – Open problem: Learning methods for graph burning
  • 12:30–13:30 Lunch
  • 13:30–15:00 Open problems / discussion / collaboration
  • 15:00–15:30 Coffee/Tea
  • 15:30–16:30 Çağıl Koçyiğit – Integrating Machine Learning and Optimization for Efficient, Fair, and Interpretable Allocation of Scarce Resources
  • 19:00–22:30 Social dinner - De Kop van Oost
WEDNESDAY 5 NOVEMBER

  • 09:50–10:00 Coffee/Tea
  • 10:00–11:00 Francis Bach – Denoising diffusion models without diffusions
  • 11:00–11:30 Coffee/Tea
  • 11:30–12:00 Leo van Iersel – Reconstructing phylogenetic networks by combining cherry picking and machine learning
  • 12:00–12:30 Lennart Kauther – On the Approximability of Train Routing and the Min-Max Disjoint Paths Problem
  • 12:30–13:30 Lunch
  • 13:30–15:00 Open problems / discussion / collaboration
  • 15:00–15:30 Coffee/Tea
  • 15:30–16:30 Bart van Parys – Optimal Decisions with Messy Data and Uncertainty
THURSDAY 6 NOVEMBER

  • 09:50–10:00 Coffee/Tea
  • 10:00–11:00 Marleen Balvert – Uncovering the genetic causes of diseases using machine learning and optimization
  • 11:00–11:30 Coffee/Tea
  • 11:30–12:00 Ilker Birbil – Towards Explainable Optimization: Current Research and Open Problems
  • 12:00–12:30 Lunch takeaway (packed)

Participants & Organizers

  • Adia Lumadjeng (UvA)
  • Afrasiab Kadhum (ORTEC)
  • Ahmadreza Marandi (Eindhoven University of Technology)
  • Alessandro Zocca (VU Amsterdam)
  • Ali Nikseresht (RSM, Erasmus University)
  • Arnoud den Boer (UvA)
  • Casper Loman (Utrecht University)
  • Charu Agrawal (TU Delft)
  • Christopher Hojny (TU Eindhoven)
  • Daan Otto (UvA)
  • Francesco Giliberto (VU Amsterdam)
  • Gao Peng (CWI)
  • Gaurav Rattan (University of Twente)
  • Giulia Bernardini (University of Trieste)
  • Gloria Pietropolli (University of Trieste)
  • Habtemariam Aberie Kefale (TU Delft)
  • Jannis Kurtz (UvA)
  • Jehum Cho (Erasmus School of Economics, Erasmus University Rotterdam)
  • Jens Schlöter (CWI)
  • Joaquim Gromicho (ORTEC and UvA)
  • Lara Scavuzzo (TU Delft)
  • Lennart Kauther (RWTH Aachen)
  • Lucas Vogels (UvA)
  • Marius Captari (UvA)
  • Marjan van den Akker (Utrecht University)
  • Maryam Karimi-Mamaghan (Assistant Professor, School of Business and Economics, Vrije Universiteit Amsterdam)
  • Mehmet Hakan Akyüz (Middle East Technical University)
  • Mohammad Boveiri (TU Delft)
  • Oussama Azizi (TU Delft)
  • Philip de Bruin (Utrecht University)
  • Pieter Kleer (Tilburg University)
  • Rajath Rao (VU Amsterdam)
  • Sophie Huiberts (CNRS)
  • Stelios Nikolakakis (PhD candidate)
  • Steven Kelk (Maastricht University)
  • Tabea Röber (UvA)
  • Ward Romeijnders (University of Groningen)
  • Wouter Kool (ORTEC)
  • Wouter Koolen (CWI)
  • Yaniv Oren (TU Delft)
  • Yasamin Nazari (VU & CWI)
  • Yukihiro Murakami (TU Delft)
  • Karen Aardal (TU Delft)
  • Ilker Birbil (UvA)
  • Dick den Hertog (UvA)
  • Leo van Iersel (TU Delft)
  • Etienne de Klerk (Tilburg University)
  • Guido Schäfer (CWI)
  • Leen Stougie (CWI)
  • Pascal Carlier (CWI – assistant)

Lightning & Contributed Talks / 5 Min Open Problem Pitch

MONDAY 3 NOVEMBER

Title:
Decomposition algorithms for stochastic mixed-integer programs

Abstract:
We review primal and dual decomposition algorithms for stochastic mixed-integer programs. This includes a Benders' decomposition algorithm, in which we iteratively construct tighter lower bounds of the expected cost functions using so-called scaled optimality cuts. We derive these cuts by parametrically solving extended formulations of the second-stage problems using deterministic mixed-integer programming techniques. The advantage of these scaled cuts is that they allow for parametric non-linear feasibility cuts in the second stage, but that the optimality cuts in the master problem remain linear. We establish convergence by proving that the optimality cuts recover the convex envelope of the expected second-stage cost function.
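
To illustrate the general optimality-cut mechanism behind such decomposition schemes (not the scaled cuts of the talk), the following sketch runs a Kelley-style cutting-plane loop that builds an outer approximation of a one-dimensional convex expected-cost function, with `scipy.optimize.linprog` solving the small master problems. All names and the toy instance are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def cutting_plane_min(f, subgrad, lo, hi, iters=25):
    """Toy Kelley-style cutting-plane loop: minimize a convex cost f
    on [lo, hi] by accumulating optimality cuts t >= f(xk) + g_k*(x - xk).
    Illustrates the outer-approximation idea behind Benders-type
    optimality cuts; real decomposition methods cut a multivariate
    expected second-stage cost instead."""
    x, cuts = lo, []
    for _ in range(iters):
        fx, g = f(x), subgrad(x)
        cuts.append((fx - g * x, g))          # cut: t >= a + g*x
        # Master problem over (x, t): minimize t subject to all cuts,
        # written as g*x - t <= -a for linprog.
        A = [[g_k, -1.0] for (_, g_k) in cuts]
        b = [-a_k for (a_k, _) in cuts]
        res = linprog([0.0, 1.0], A_ub=A, b_ub=b,
                      bounds=[(lo, hi), (None, None)])
        x = res.x[0]
    return x, f(x)

# Toy expected cost: f(x) = E|x - d| for demand d uniform on {1, 3};
# the minimum value is 1, attained on [1, 3].
f = lambda x: 0.5 * (abs(x - 1) + abs(x - 3))
g = lambda x: 0.5 * (np.sign(x - 1) + np.sign(x - 3))
```

Each new cut tightens the lower approximation of the cost function, mirroring how optimality cuts eventually recover its convex envelope.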

Title:
Robust Spare Part Inventory Management

Abstract:
We focus on spare parts inventory control under demand uncertainty, particularly during the New Product Introduction (NPI) phase, when historical data is limited. Most conventional spare parts inventory control models assume demand follows a Poisson process with a known rate. However, the rate may not be known when limited data is available. We propose an adaptive robust optimization (ARO) approach to multi-item spare parts inventory control. We show how the ARO problem can be reformulated as a deterministic integer programming problem. We develop an efficient algorithm to obtain near-optimal solutions for thousands of items. We demonstrate the practical value of our model through a case study at ASML, a leading semiconductor equipment supplier. The case study reveals that our model consistently achieves higher service levels at lower costs than the conventional stochastic optimization approach employed at ASML.

TUESDAY 4 NOVEMBER

Title:
Predicting Willingness to Vaccinate While Optimizing Mobile Clinic Routes in Kenya

Abstract:
Kenya presents major challenges in terms of access to healthcare. While urban areas such as Nairobi are relatively well served, remote regions have limited or no access to health centers. Mobile clinics are therefore essential both to respond during outbreaks and to deliver preventive vaccination campaigns.

This work presents a decision-support tool for optimizing the routing of mobile clinics, using COVID-19 vaccination as a use case. A major challenge lies in the scarcity and unreliability of data: it is often unclear who has already been vaccinated and who is likely to accept vaccination. At the same time, vaccines such as those against COVID-19 require strict cold-storage as per WHO guidelines, making the logistics of vaccination campaigns complex and resource-intensive.

We address these challenges through an integrated framework that combines predictive modelling and optimization. Our method leverages the machine learning extension to Gurobi (an evolution of OptiCL, the open-source framework created by Donato Maragno and Holly Wiberg, https://github.com/hwiberg/OptiCL) to incorporate learned constraints that estimate demand and willingness to vaccinate. These are integrated into a routing model with lazily generated subtour elimination constraints, enabling efficient and adaptive planning of mobile clinic operations.
(Main researcher: Mayukh Ghosh | Team: Mayukh Ghosh, Chintan Amrit, Joaquim Gromicho)
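
Lazily generated subtour elimination constraints follow a standard pattern: whenever the solver produces an integer candidate whose selected arcs split into several cycles, each short cycle S triggers a cut restricting the arcs inside S to at most |S| - 1. Below is a minimal sketch of the separation step only, assuming the candidate is a successor permutation (as enforced by the model's degree constraints); it is an illustrative assumption, not the project's code.

```python
def find_subtours(succ, n):
    """Decompose a candidate routing solution into its cycles.
    `succ` maps each node 0..n-1 to its successor in the candidate
    (assumed to be a permutation: in- and out-degree one per node).
    If more than one cycle is returned, each cycle S with |S| < n
    yields a lazy subtour elimination cut:
        sum of arcs inside S <= |S| - 1."""
    seen, cycles = set(), []
    for start in range(n):
        if start in seen:
            continue
        cycle, node = [], start
        # Follow successors until we return to an already-visited node,
        # which (for a permutation) is the cycle's starting node.
        while node not in seen:
            seen.add(node)
            cycle.append(node)
            node = succ[node]
        cycles.append(cycle)
    return cycles
```

In a MIP solver this check would run inside the integer-solution callback, with the corresponding cut added lazily whenever more than one cycle is found.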


WEDNESDAY 5 NOVEMBER

Title:
Reconstructing phylogenetic networks by combining cherry picking and machine learning

Abstract:

Title:
On the Approximability of Train Routing and the Min-Max Disjoint Paths Problem

Abstract:
In train routing, the headway is the minimum distance that must be maintained between successive trains for safety and robustness. We introduce a model for train routing that requires a fixed headway to be maintained between trains, and study the problem of minimizing the makespan, i.e., the arrival time of the last train, in a single-source single-sink network. For this problem, we first show that there exists an optimal solution in which trains move in convoys, that is, the optimal paths for any two trains are either identical or arc-disjoint.

Via this insight, we are able to reduce the approximability of our train routing problem to that of the Min-max Disjoint Paths problem, which asks for a collection of disjoint paths where the maximum length of any path in the collection is as small as possible.

While Min-max Disjoint Paths inherits a strong inapproximability result on directed acyclic graphs from the Multi-level Bottleneck Assignment problem, we show that a natural greedy composition approach yields a logarithmic approximation in the number of disjoint paths for series-parallel graphs.

We also present an alternative analysis of this approach that yields a guarantee depending on how often the decomposition tree of the series-parallel graph alternates between series and parallel compositions on any root-leaf path.

MONDAY 3 NOVEMBER

Title:
Verifying Message-Passing Neural Networks Via Topology-Based Bounds Tightening

Abstract:
There is increasing interest in embedding trained neural networks (NNs) into mixed-integer programs (MIPs) to express complicated functional relations, e.g., properties of a molecule derived from its structure. The state-of-the-art MIP models for trained NNs, however, are notoriously hard to solve because variable bounds are either very weak or too time-consuming to compute more strongly. In this presentation, I will focus on the verification problem of trained graph neural networks, which is modeled as a MIP. For this application, I will demonstrate how to exploit the structure of the underlying problem to efficiently find stronger variable bounds and how to embed them dynamically into branch-and-bound. Numerical experiments indicate that the strengthened variable bounds allow the verification problem to be solved significantly faster.

(This is joint work with Shiqiang Zhang, Juan S. Campos, and Ruth Misener)

Title:
Optimal Information Relaxation Bounds for Multi-Stage Stochastic Optimization

(Francesco Giliberto, David Wozabal, Rosario Paradiso. September 25, 2025)

Abstract:
This paper addresses the computation of tight optimistic bounds for multi-stage stochastic optimization problems using information relaxation duality. We introduce a specific class of penalty functions, bilinear in decisions and the innovations of the underlying stochastic process, to penalize anticipative policies. Our approach provides a generic framework for deriving such bounds, notably without requiring explicit knowledge or approximation of the problem's value functions. We formulate a minimax problem to find the optimal penalty parameters within this specific class, yielding the tightest bound achievable with these penalties. We show that for convex problems, this minimax problem can be equivalently reformulated as a standard stochastic program with expectation constraints. Furthermore, we propose an iterative algorithm to solve the minimax problem directly.
The methodology offers a computationally tractable approach to generate bounds that are stronger than simple perfect information relaxations, thereby improving the evaluation of heuristic policies.

Title:
Coherent Local Explanations for Mathematical Optimization

Abstract:
Explainable artificial intelligence methods seek to enhance transparency and explainability in machine learning models. At the same time, there is a growing demand for explaining decisions taken through complex algorithms used in mathematical optimization. However, current explanation methods do not take the structure of the underlying optimization problem into account, leading to unreliable outcomes. In response to this need, we introduce Coherent Local Explanations for Mathematical Optimization (CLEMO). CLEMO provides explanations for multiple components of optimization models (the objective value and decision variables) that are coherent with the underlying model structure. Our sampling-based procedure can provide explanations for the behavior of exact and heuristic solution algorithms. The effectiveness of CLEMO is illustrated by experiments on the shortest path problem, the knapsack problem, and the vehicle routing problem, using parametric regression models as explanations.

TUESDAY 4 NOVEMBER

Title:
AI and proofs - things are moving quickly

Abstract:

Yukihiro Murakami

Title:
Learning methods for graph burning

.....................................................................................................................

Registration

Please note that participation in this workshop is by invitation only!

Participation is free of charge, but registration is required.

The registration is now closed.

Accommodation and venue

Please be aware that hotel prices in Amsterdam can be quite steep. We strongly recommend that all participants secure their hotel reservations as early as possible!

Hotel recommendations

From these hotels, the venue can be reached in 15–30 minutes by public transport. On all public transport, you can check in and out with a contactless Mastercard or Visa credit card, or with Apple Pay or Google Wallet.

Venue

The conference will be held in the Turing Hall at the Congress Centre of Amsterdam Science Park, next to Centrum Wiskunde & Informatica (CWI).

Address: Science Park 125, 1098 XG Amsterdam

See here for location in Google Maps.

CWI research semester programme

See here for more information about the whole research semester programme.

Financial support

We gratefully acknowledge the financial support of CWI and OPTIMAL, whose contributions helped make this event possible.
