# Bayesian learning from data: challenges, limitations and pragmatics

Publication date: 25-01-2021

De Heide’s dissertation “*Bayesian learning from data: challenges, limitations and pragmatics*” is about Bayesian learning from data. How humans and computers can learn from data is a core question of both statistics and machine learning. Bayesian methods are widely used in these fields, yet they have certain limitations and problems of interpretation. De Heide examines two such limitations, and overcomes them by extending the Bayesian framework in two different ways. Furthermore, she discusses how different philosophical interpretations of Bayesianism affect mathematical definitions and theorems about Bayesian methods and their use in practice. She also applies Bayesian methods in a pragmatic way: merely as a tool for interesting learning problems.

**A new theory for hypothesis testing**

An interesting learning problem at the core of applied statistics (for instance in medicine and psychology) is hypothesis testing. Null hypothesis significance testing based on p-values is widely used in these fields, even though the use of p-values has been severely criticised for at least the last 50 years. Some propose a Bayesian revolution, with so-called Bayes factors as a replacement for the p-value, yet sometimes overlook limitations of Bayesian methods; these limitations are investigated in this dissertation. De Heide proposes no less than a new theory of hypothesis testing, based on a concept called the e-variable. This is a random variable similar to, but in many cases an improvement on, the p-value. E-variables provide a common language to express strength of evidence for adherents of different hypothesis testing philosophies: they can be interpreted in Fisherian, Neyman-Pearsonian and Bayesian ways, and evidence from experiments originating from those different paradigms can be freely combined. Moreover, some of the main issues with p-values are resolved, such as interpretability problems for practitioners: e-variables allow for a clear interpretation in terms of money or gambling. An optimality criterion for designing e-variables is introduced, called GROW, which stands for Growth-Rate Optimal in Worst-case. When the null hypothesis is not true, a GROW e-variable will provide evidence against it in the fastest possible way, while retaining the other desirable properties of e-variables.
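The defining property of an e-variable is that its expected value under the null hypothesis is at most one; by Markov's inequality this bounds the type-I error, and its realised value can be read as the payoff of a gamble against the null. A minimal sketch of this property, using a likelihood ratio between two simple normal models chosen purely for illustration (the models and sample sizes here are my own toy example, not taken from the dissertation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy e-variable: the likelihood ratio for the simple null
# H0: X ~ N(0, 1) against the simple alternative H1: X ~ N(1, 1).
# Under H0 its expectation equals exactly 1, so it satisfies the
# defining e-variable property E_H0[E] <= 1.
def e_variable(x):
    # density ratio p1(x) / p0(x) for the two normals above
    return np.exp(x - 0.5)

# Check the property by simulation under the null:
x0 = rng.normal(0.0, 1.0, size=200_000)
mean_under_null = np.mean(e_variable(x0))   # close to 1

# Under the alternative, log(E) has positive expectation, so the
# e-variable grows and accumulates evidence against H0; by Markov's
# inequality, P_H0(E >= 1/alpha) <= alpha for any alpha in (0, 1).
x1 = rng.normal(1.0, 1.0, size=200_000)
growth_rate = np.mean(np.log(e_variable(x1)))  # close to 0.5 here
```

The "growth rate" computed at the end is exactly the quantity that the GROW criterion optimises in its worst case over the alternative.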

**Group invariance in statistics**

An intriguing theme that turns up in De Heide’s dissertation is group invariance in hypothesis testing. In hypothesis testing problems one often has to deal with nuisance parameters: parameters that occur in both models (the null and the alternative hypothesis), that are not directly of interest, but that need to be accounted for in the analysis. Something special happens when, for such a nuisance parameter, there exists a suitable transformation group that leaves both models invariant: it renders hypothesis tests robust under optional stopping, and it turns out that the Bayes factor with such a group structure (when the corresponding right Haar measure is used) is an e-variable; moreover, it is the GROW e-variable.
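As a toy illustration of such a group structure (this example is mine, not taken from the dissertation): when testing the mean of a normal distribution with unknown scale, the scale parameter is a nuisance parameter, and the group of scale transformations x → c·x (c > 0) leaves both models invariant. The classical t-statistic depends on the data only through a maximal invariant of this group, so it is unchanged under the group action:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, size=30)

# One-sample t-statistic: a function of the data that is invariant
# under the scale group x -> c * x (c > 0), because rescaling the data
# rescales the sample mean and sample standard deviation by the same c.
def t_stat(x):
    return np.sqrt(len(x)) * np.mean(x) / np.std(x, ddof=1)

# The statistic is identical for any positive rescaling of the data:
for c in (0.5, 3.0, 10.0):
    assert np.isclose(t_stat(c * x), t_stat(x))
```

Inference based on such invariant statistics does not depend on the unknown value of the nuisance parameter, which is the mechanism behind the robustness results described above.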

**More information**

The defense of the dissertation *Bayesian learning from data: challenges, limitations and pragmatics* can be followed virtually via https://www.universiteitleiden.nl/wetenschappers/livestream-promotie

Tuesday 26 January 2021, 16.15-17.15 hrs

Promotors: Prof.dr. P.D. Grünwald (CWI, UL) and Prof.dr. J.J. Meulman (UL)

The livestream will start a few moments before the defense begins. After the defense it will be turned off during the deliberations of the committee. After 5-10 minutes the video will start again for the conclusions of the committee. Note that the livestream does not restart automatically, so you will have to keep refreshing the page yourself until it starts (again).

- Rianne de Heide's website
- Machine Learning group CWI
- de Heide, R., & Grünwald, P. D. (2020). *Why optional stopping can be a problem for Bayesians*. *Psychonomic Bulletin and Review*. doi:10.3758/s13423-020-01803-x