Peter Grünwald heads the machine learning group at CWI in Amsterdam, the Netherlands. He is also full professor of Statistical Learning at the Mathematical Institute of Leiden University. He is currently President of the Association for Computational Learning, the organization running COLT, the world's premier annual conference on machine learning theory. He was co-program chair of COLT in 2015 and also chaired UAI – another top ML conference – in 2010/2011. Apart from publishing at ML venues such as NIPS, COLT and UAI, he also regularly contributes to top statistics journals such as the Annals of Statistics. He is the author of the book The Minimum Description Length Principle (MIT Press, 2007; see here for an up-to-date (2020), much shorter introduction), which has become the standard reference for the MDL approach to learning from data. In 2010 he was co-awarded the Van Dantzig prize, the highest Dutch award in statistics and operations research. He received NWO VIDI (2005), VICI (2010) and TOP-1 (2016) grants.
My current research interests mostly focus on what I call Safe Learning, Safe Statistics and Safe Probability. The basic idea is to make sure that inference from data is done in, indeed, a safer way. The 'replicability crisis' in the applied sciences provides ample evidence that we very often jump to conclusions which simply aren't justified. The goal of much of my research is to improve this situation! Currently, I am mostly working on:
- Safe Bayesian Inference: Repairing Bayesian inference under misspecification (when the model is wrong, but useful)
- Safe Testing: Hypothesis Testing and Model Choice under Optional Stopping and Optional Continuation
- Safe Probability: working with probability distributions that only capture parts, not all, of your domain of interest.
- Learning Bounds: quantifying how much data is needed to reach conclusions of a desired quality in machine learning and sequential prediction, using generalized Bayesian methods, PAC-Bayesian methods and MDL methods.
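To see why optional stopping is a problem in the first place, a small simulation helps (a hypothetical illustration using standard-library Python, not code from my own projects): if one repeatedly computes a classical p-value as data accumulate and stops as soon as p < 0.05, the Type I error rate is inflated well above the nominal 5%, even though all data are generated under the null hypothesis.

```python
import math
import random

def p_value(sample_mean, n):
    # Two-sided z-test p-value for H0: mu = 0, with known sigma = 1.
    z = abs(sample_mean) * math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def trial_with_peeking(rng, max_n=200, peek_every=10, alpha=0.05):
    # Draw data under the null (mu = 0); peek at the running p-value
    # every `peek_every` observations and stop as soon as it is "significant".
    total = 0.0
    for i in range(1, max_n + 1):
        total += rng.gauss(0, 1)
        if i % peek_every == 0 and p_value(total / i, i) < alpha:
            return True  # rejected H0 (a false positive here)
    return False

rng = random.Random(42)
trials = 2000

peeking_rate = sum(trial_with_peeking(rng) for _ in range(trials)) / trials
fixed_rate = sum(
    p_value(sum(rng.gauss(0, 1) for _ in range(200)) / 200, 200) < 0.05
    for _ in range(trials)
) / trials

print(f"Type I error with peeking: {peeking_rate:.3f}")  # well above 0.05
print(f"Type I error at fixed n:   {fixed_rate:.3f}")    # close to 0.05
```

The fixed-sample test keeps its promised 5% error rate, while the peeking version rejects far more often; safe tests (e.g. based on e-values) are designed so that their error guarantees survive exactly this kind of data-dependent stopping.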