Launch Lecture - Machine Learning Theory (Semester Programme)

This is the first lecture event of the ML Theory semester programme.

14 Feb 2023, 2 p.m. to 5 p.m. CET (UTC+01:00)
Amsterdam Science Park Congress Centre, Turing room

The ML Theory semester programme runs in Spring 2023.

This afternoon, two distinguished researchers in ML theory will give lectures.

Registration is closed.

With this "Launch Lecture", we kick off the Machine Learning Theory semester programme, which runs in Spring 2023. We are delighted to host an afternoon with two distinguished researchers speaking about their advances in our understanding of machine learning. These research-level lectures are intended for a non-specialist audience. The lectures are freely accessible.

The general theme of the afternoon is "The Nature of Information and its Active Acquisition". Our two speakers will illuminate philosophical and technical aspects of machine learning; for more details, see the abstracts below. We are looking forward to a strong programme and hope to welcome you on the 14th.

This is the first lecture event of the ML Theory semester programme, with a second lecture following on April 12.

Date          Time    Event
14 February   14:00   Bob Williamson, Foundations of Machine Learning Systems
14 February   15:00   Break
14 February   15:30   Emilie Kaufmann, A Tale of Two Non-parametric Bandit Problems
14 February   16:30   Reception


Bob Williamson (University of Tübingen) is Professor of Foundations of Machine Learning Systems at the University of Tübingen, where he is also a member of the Tübingen AI centre and the Cluster of Excellence (Machine Learning in the Sciences). Previously he was at the ANU and NICTA in Australia. He is a fellow of the Australian Academy of Science.

Foundations of Machine Learning Systems

Abstract: I will present new insights into some foundational assumptions about machine learning systems: why we might want to replace the expectation in our definition of generalisation error; why independence is intrinsically relative and intimately related to fairness; why the data we ingest might not even have a probability distribution, and what one might do in such cases; and how we have been (perhaps unwittingly) working with these more exotic notions for some time already.

Emilie Kaufmann (CNRS, Univ. Lille) is a CNRS researcher at the University of Lille, France, and a member of the Inria Scool team. She received her PhD in statistics from Telecom ParisTech in 2014 and has since studied various types of sequential decision-making problems. In particular, she has worked extensively on multi-armed bandit problems. Her recent research interests include applications of bandits to (sequential) clinical trials and the sample complexity of more general reinforcement learning problems.

A Tale of Two Non-parametric Bandit Problems

Abstract: In a bandit model, an agent sequentially collects samples (rewards) from different distributions, called arms, in order to achieve some objective related to learning or playing the best arms. Depending on the application, different assumptions can be made on these distributions, from Bernoulli (e.g., to model the success or failure of a treatment) to complex multi-modal distributions (e.g., to model the yield of a crop in agriculture). In this talk, we will present non-parametric algorithms which adapt optimally to the actual distributions of the arms, assuming that they are bounded. We will first show the robustness of a Non-Parametric Thompson Sampling strategy to a risk-averse performance metric. Then, we will discuss how the algorithm can be modified to tackle pure exploration objectives, bringing new insights into so-called Top Two algorithms.
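As a flavour of the setting discussed in the abstract, the sketch below illustrates one known form of Non-Parametric Thompson Sampling for rewards bounded in [0, 1] (following the Riou-Honda construction): each arm's index is a Dirichlet-weighted average of its observed rewards augmented with the upper bound 1, and the arm with the largest index is pulled. This is an illustrative sketch only, not the specific algorithm of the talk; all function names are our own.

```python
import numpy as np

def npts_index(rewards, rng):
    """Random index of one arm: a Dirichlet-weighted average of the
    observed rewards augmented with the upper bound 1 (optimism).
    Rewards are assumed to lie in [0, 1]."""
    augmented = np.append(rewards, 1.0)
    weights = rng.dirichlet(np.ones(len(augmented)))
    return float(weights @ augmented)

def npts_choose_arm(history, rng):
    """Pull the arm whose random index is largest."""
    return int(np.argmax([npts_index(r, rng) for r in history]))

# Toy run: two Bernoulli arms with means 0.2 and 0.8.
rng = np.random.default_rng(0)
means = [0.2, 0.8]
history = [[] for _ in means]   # observed rewards per arm
pulls = [0, 0]
for _ in range(300):
    arm = npts_choose_arm(history, rng)
    reward = float(rng.random() < means[arm])
    history[arm].append(reward)
    pulls[arm] += 1
# After a few hundred rounds, the better arm is typically pulled far more often.
```

The augmentation with the value 1 is what makes the strategy optimistic without any parametric assumption on the arm distributions, which is the sense in which such algorithms "adapt to the actual distributions" of bounded arms.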