Boot Camp - Machine Learning Theory (Semester Programme)

The boot camp kicks off our Machine Learning Theory semester programme. The ML Theory semester programme runs in Spring 2023.

14 February 2023, 9:30 a.m. to 15 February 2023, 4:30 p.m. CET (GMT+0100)
Amsterdam Science Park Congress Center, Euler room

This two-day boot camp is intended for PhD students in ML theory. We will have one and a half days of tutorials by researchers, one afternoon of lectures by international keynote speakers, a poster session, a joint dinner, and plenty of time for interaction.

Participation is free, but registration is mandatory, as the workshop has limited capacity. Registration for the event is now open.

Schedule: Boot Camp Research Semester Programme Machine Learning Theory

14 February
09:30  Johannes Schmidt-Hieber
10:30  Coffee break
10:45  Frans Oliehoek
11:45  Break
12:00  Gabriele Cesa
13:00  Lunch + Posters
14:00  Bob Williamson
15:00  Coffee break
15:30  Emilie Kaufmann
16:30  Reception
18:00  Dinner

15 February
09:15  Tim van Erven
10:30  Coffee break
10:45  Rui Castro
11:45  Break
12:00  Jaron Sanders
13:00  Lunch
14:00  Ronald de Wolf
15:00  Coffee break
15:30  Mathias Staudigl

Confirmed speakers

Tim van Erven
Associate professor at the Korteweg-de Vries Institute for Mathematics at the University of Amsterdam in the Netherlands.

Formal Results in Explainable Machine Learning

Since most machine learning systems are not inherently interpretable, explainable machine learning tries to generate explanations that communicate relevant aspects of their internal workings. This is a relatively young subfield, which is generating a lot of excitement, but it is proving very difficult to lay down proper foundations: What is a good explanation? When can we trust explanations? Most of the work in this area is based on empirical evaluation, but recently the first formal mathematical results have started to appear. In this tutorial, I will introduce the topic, and then highlight several formal results of interest.
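
One example of the kind of formal result this line of work aims for (not necessarily one covered in the tutorial) concerns Shapley-value attributions, a popular explanation method: for a linear model with features replaced by background means, the Shapley value of feature i provably equals w_i * (x_i - mean_i). The sketch below verifies this identity by brute force on a made-up model; all numbers and names are illustrative.

```python
import itertools
import math
import statistics

# Hypothetical linear model f(x) = w . x + b with a toy background data set.
w = [2.0, -1.0, 0.5]
b = 3.0
background = [[0.0, 1.0, 2.0], [2.0, 3.0, 0.0]]
means = [statistics.mean(col) for col in zip(*background)]

def f_coalition(x, coalition):
    """Model output when features outside `coalition` are set to their
    background means (the 'interventional' convention)."""
    z = [x[i] if i in coalition else means[i] for i in range(len(x))]
    return sum(wi * zi for wi, zi in zip(w, z)) + b

def shapley(x, i):
    """Exact Shapley value of feature i, by brute force over all coalitions."""
    n = len(x)
    others = [j for j in range(n) if j != i]
    total = 0.0
    for k in range(n):
        for coalition in itertools.combinations(others, k):
            s = set(coalition)
            weight = (math.factorial(len(s)) * math.factorial(n - len(s) - 1)
                      / math.factorial(n))
            total += weight * (f_coalition(x, s | {i}) - f_coalition(x, s))
    return total

x = [1.0, 0.0, 4.0]
attributions = [shapley(x, i) for i in range(3)]
```

For this model the attributions work out to [0.0, 2.0, 1.5], exactly w[i] * (x[i] - means[i]); for non-linear models no such closed form exists, which is where formal analysis gets harder.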

Johannes Schmidt-Hieber
Professor of statistics at the University of Twente.

The current state in the development of a statistical foundation for deep neural networks

Recently, a lot of progress has been made on the theoretical understanding of deep neural networks. One very promising direction is the statistical approach, which interprets deep learning as a statistical method and builds on existing techniques in mathematical statistics to derive theoretical error bounds and to understand phenomena such as overparametrization. The talk surveys this field and describes future challenges.

Jaron Sanders
Assistant Professor (development track) at the Eindhoven University of Technology.

Detecting Clusters in Time-Series

Motivated by theoretical advances in dimensionality-reduction techniques, we used a recent model called Block Markov Chains (BMCs) to conduct a practical study of clustering in real-world sequential data. Clustering algorithms for BMCs notably possess theoretical optimality guarantees and can be deployed in sparse data regimes. We ultimately found that the BMC model assumption can indeed produce meaningful insights in exploratory data analyses, despite the complexity and sparsity of real-world data.

Based on the study mentioned above, I will introduce you to the idea of dimensionality reduction via clustering; to methods for detecting clusters, in time series in particular; to theoretical properties of clustering that we can compare against; and to tools that can help you evaluate clusters that you may find in data. I will also point you to an efficient implementation of our clustering algorithm and of the evaluation tools for BMCs that we have made available. All in all, my talk might just help you discover hidden latent spaces in your own time series of interest.
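
To illustrate the BMC idea concretely: in a BMC the transition probabilities out of a state depend only on the state's block, so states in the same block have identical rows in the transition matrix, and clustering amounts to grouping states with similar empirical rows. The toy sketch below uses naive L1 row matching; it is not the spectral algorithm with optimality guarantees referred to in the abstract, and all numbers are made up.

```python
import random

random.seed(0)

# Toy Block Markov Chain: 4 states in two blocks {0, 1} and {2, 3}.
# Rows of the transition matrix depend only on the block of the current
# state, with probability mass spread uniformly inside the target block.
rows = {
    0: [0.45, 0.45, 0.05, 0.05],
    1: [0.45, 0.45, 0.05, 0.05],
    2: [0.10, 0.10, 0.40, 0.40],
    3: [0.10, 0.10, 0.40, 0.40],
}

# Simulate a long trajectory.
state, seq = 0, [0]
for _ in range(50_000):
    state = random.choices(range(4), weights=rows[state])[0]
    seq.append(state)

# Empirical transition-frequency matrix.
counts = [[0] * 4 for _ in range(4)]
for s, t in zip(seq, seq[1:]):
    counts[s][t] += 1
emp = [[c / max(1, sum(row)) for c in row] for row in counts]

def l1(a, b):
    return sum(abs(u - v) for u, v in zip(a, b))

# Naive clustering: put states whose empirical rows are close in L1 norm
# into the same group.
clusters = []
for s in range(4):
    for c in clusters:
        if l1(emp[s], emp[c[0]]) < 0.3:
            c.append(s)
            break
    else:
        clusters.append([s])
```

With enough data this recovers the two blocks; the sparse regimes mentioned above are precisely where such naive row matching breaks down and the guaranteed algorithms are needed.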

Ronald de Wolf
Researcher at the Algorithms and Complexity group of CWI (Dutch Centre for Mathematics and Computer Science) and part-time full professor at the ILLC of the University of Amsterdam.

Tutorial on Quantum Machine Learning

Machine learning can be enhanced by quantum computing, both by allowing quantum data and by having quantum speed-ups for the optimization process that finds a good model for given data. This tutorial will give an introduction to quantum computing (what are quantum algorithms? what do we know about them?) and then examine how they can help machine learning.

Rui Castro
Associate professor of statistics, and a member of the Statistics, Probability and Operations Research (SPOR) cluster of the Mathematics Department at TU Eindhoven.

Anomaly detection for a large number of streams: a permutation/rank-based higher criticism approach

Anomaly detection when observing a large number of data streams is essential in a variety of applications, ranging from epidemiological studies to the monitoring of complex systems. High-dimensional scenarios are usually tackled with scan statistics and related methods, requiring stringent modeling assumptions for proper test calibration. In this tutorial we discuss ways to drop these stringent assumptions, while still ensuring essentially optimal performance. We take a non-parametric stance, and introduce two variants of the higher criticism test that do not require knowledge of the null distribution for proper calibration. In the first variant we calibrate the test by permutation, while in the second variant we use a rank-based approach. Both methodologies result in exact tests in finite samples, and showcase the analytical tools needed for the study of this type of resampling approach. Our permutation methodology is applicable when observations within null streams are independent and identically distributed, and we show this methodology is asymptotically optimal in the wide class of exponential models. Our rank-based methodology is more flexible, and only requires observations within null streams to be independent. We provide an asymptotic characterization of the power of the test in terms of the probability of mis-ranking null observations, showing that the asymptotic power loss (relative to an oracle test) is minimal for many common models. As the proposed statistics do not rely on asymptotic approximations, they typically perform better than popular variants of higher criticism relying on such approximations. We demonstrate the use of these methodologies when monitoring the daily number of COVID-19 cases in the Netherlands.

(Based on joint work with Ivo Stoepker, Ery Arias-Castro and Edwin van den Heuvel.)
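
The starting point for these calibrated variants is the classical higher-criticism statistic of Donoho and Jin, which is simple to compute from a collection of p-values. The sketch below also simulates a null critical value by Monte Carlo under i.i.d. uniform p-values; this is a crude stand-in for (not an implementation of) the exact permutation and rank calibrations developed in the talk.

```python
import math
import random

def higher_criticism(pvals):
    """Donoho-Jin higher-criticism statistic of a collection of p-values."""
    n = len(pvals)
    hc = -math.inf
    for i, p in enumerate(sorted(pvals), start=1):
        if 0.0 < p < 1.0:
            hc = max(hc, math.sqrt(n) * (i / n - p) / math.sqrt(p * (1 - p)))
    return hc

# Under the global null the p-values are i.i.d. uniform, so a critical
# value can be simulated by Monte Carlo: reject when the observed HC
# statistic exceeds the simulated 95th percentile.
random.seed(1)
null_stats = sorted(higher_criticism([random.random() for _ in range(100)])
                    for _ in range(2000))
critical = null_stats[int(0.95 * len(null_stats))]
```

A single very small p-value among many moderate ones drives the statistic up, which is why higher criticism is well suited to detecting a few anomalous streams among many null ones.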

Mathias Staudigl
Associate Professor for Multi-Agent Optimization at the Department of Advanced Computing Sciences (DACS) at Maastricht University.

Learning and Games 

Game theory is a powerful methodological tool for mathematically formalizing and studying strategic optimization problems between self-interested agents. Historically, game-theoretic models played a fundamental role in economics and operations research as a qualitative model of economic decision making. However, given its intimate connection with the theory of variational inequalities, a more quantitative line of research quickly emerged, aimed at numerically computing equilibria in large games. More recently, game theory has come to play a significant role in machine learning and AI, both for generating robust predictions (in the min-max sense) and in deep learning architectures (GANs). In the first part of this lecture, I will summarize a unified approach to learning in games based on regularization techniques and variational analysis. We will stress the connection between learning algorithms and dynamical-systems methods. Recent connections with non-stationary regret measures will also be discussed. The second part will focus on splitting-based algorithms designed to converge to a class of Nash equilibrium points in more general settings where players' decisions are subject to coupling constraints.
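
A minimal instance of regularized learning in games, likely simpler than anything in the lecture, is exponential weights (entropic mirror descent) in matching pennies: the last iterate cycles around the equilibrium, but the time-averaged strategies approach the unique mixed Nash equilibrium ((1/2, 1/2), (1/2, 1/2)). A sketch, with step size and horizon chosen arbitrarily:

```python
import math

# Matching pennies: the row player maximizes x^T A y, the column player
# minimizes it; the unique Nash equilibrium is ((1/2, 1/2), (1/2, 1/2)).
A = [[1.0, -1.0], [-1.0, 1.0]]

def normalize(w):
    s = sum(w)
    return [v / s for v in w]

eta, T = 0.05, 20_000
x, y = [0.8, 0.2], [0.5, 0.5]  # start the row player away from equilibrium
avg_x, avg_y = [0.0, 0.0], [0.0, 0.0]

for _ in range(T):
    # Expected payoff of each pure strategy against the opponent's mix.
    gx = [sum(A[i][j] * y[j] for j in range(2)) for i in range(2)]
    gy = [sum(A[i][j] * x[i] for i in range(2)) for j in range(2)]
    # Exponential-weights (entropic mirror descent) updates.
    x = normalize([x[i] * math.exp(eta * gx[i]) for i in range(2)])
    y = normalize([y[j] * math.exp(-eta * gy[j]) for j in range(2)])
    avg_x = [a + v / T for a, v in zip(avg_x, x)]
    avg_y = [a + v / T for a, v in zip(avg_y, y)]
```

The regret bound of exponential weights guarantees that in a two-player zero-sum game the averaged strategies form an approximate Nash equilibrium, even though the iterates themselves spiral; this gap between last-iterate and average-iterate behavior is one of the themes connecting learning algorithms to dynamical systems.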

Gabriele Cesa
PhD student at the Amsterdam Machine Learning Lab (AMLab) with Max Welling and a Research Associate at Qualcomm AI Research with Taco Cohen and Arash Behboodi.

Equivariant Deep Learning

In deep learning and computer vision, it is common for data to exhibit symmetries. For instance, histopathological scans and satellite images can appear in any rotation. Examples in 3D include protein structures (which have arbitrary orientation) and natural scenes (where objects can freely rotate around their Z axis). Equivariance is becoming an increasingly popular design choice for building data-efficient neural networks by exploiting prior knowledge about the symmetries of the problem at hand.

In this tutorial, we will cover the mathematical foundations of group equivariant neural networks. In addition, we will introduce Steerable CNNs as a general and efficient framework to implement equivariant networks by relying on tools from group representation theory.
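
As a toy warm-up, far simpler than the Steerable CNN framework itself, one can check equivariance for the cyclic rotation group C4 directly: a "lifted" feature that records a filter's response under all four rotations of the input is cyclically permuted when the input is rotated, and its sum over the group is invariant. The filter and image below are arbitrary.

```python
def rot90(img):
    """Rotate a square 2D list by 90 degrees counter-clockwise."""
    n = len(img)
    return [[img[j][n - 1 - i] for j in range(n)] for i in range(n)]

def score(img, filt):
    """Correlation of an image with a single filter (one 'neuron')."""
    n = len(img)
    return sum(img[i][j] * filt[i][j] for i in range(n) for j in range(n))

def lift(img, filt):
    """C4 'lifting' feature: the filter's response for each of the four
    rotations of the input (equivalently, for four rotated filters)."""
    feats = []
    for _ in range(4):
        feats.append(score(img, filt))
        img = rot90(img)
    return feats

img = [[1, 2, 0], [0, 3, 0], [0, 0, 5]]
filt = [[1, 0, 0], [0, 1, 0], [0, 0, -1]]
# Rotating the input cyclically shifts the lifted feature (equivariance),
# and its sum over the group is unchanged (invariance).
```

Group-equivariant networks generalize this pattern from C4 to larger (including continuous) symmetry groups, with group representation theory supplying the efficient parameterizations.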

Frans Oliehoek
Associate Professor at the Interactive Intelligence group at TU Delft.

Reinforcement Learning: State of the Art & Challenges

In recent years, we have seen exciting breakthroughs in the field of reinforcement learning. In this talk, I will give a basic introduction to this general field, with a focus on clarifying some of the terminology. I will cover some of the foundations of RL, the intuition behind the state of the art, and an overview of some of the main challenges for the future.
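
As a reminder of the kind of foundation such a talk starts from, here is a minimal tabular Q-learning sketch on a toy five-state chain; the environment and all parameters are illustrative choices, not anything from the talk.

```python
import random

random.seed(0)

# Toy deterministic chain MDP: states 0..4, actions LEFT/RIGHT, reward 1
# for reaching the terminal rightmost state, 0 otherwise.
N_STATES, LEFT, RIGHT = 5, 0, 1
GOAL = N_STATES - 1

def step(s, a):
    s2 = max(0, s - 1) if a == LEFT else min(GOAL, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.3

for _ in range(2000):                  # episodes, with random start states
    s = random.randrange(GOAL)
    for _ in range(50):                # cap on steps per episode
        if random.random() < eps:      # epsilon-greedy exploration
            a = random.randrange(2)
        else:
            a = max((LEFT, RIGHT), key=lambda act: Q[s][act])
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the best action at the next state.
        target = r if done else r + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2
        if done:
            break

policy = [max((LEFT, RIGHT), key=lambda act: Q[s][act]) for s in range(N_STATES)]
```

After training, the greedy policy moves right in every non-terminal state; the value-function bootstrapping in the `target` line is the core idea that modern deep RL scales up by replacing the table `Q` with a neural network.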