Learning to sample: Practical Variational Bayesian Inference - Tristan van Leeuwen
The ability to simulate large-scale complex systems is one of the success stories of science over the past decades. It allows us to study the effects of a given cause. In many applications, the inverse problem of inferring the cause of an observed effect is also of interest. The Bayesian framework gives us a principled way to cast this task in terms of prior assumptions on the underlying physics and the parameters that we want to infer. It consists of three main tasks: modelling (formulating the prior and likelihood), sampling (sampling from the resulting posterior distribution), and analysis (computing summary statistics and interpreting the results). While the underlying mathematics is well understood, and powerful (Monte Carlo) sampling algorithms are available, this workflow remains a challenge for high-dimensional problems and for cases where we cannot easily derive the required prior and likelihood distributions. In this talk I will review how generative models can be used to tackle modelling and sampling in a unified way, given that example data (e.g., from simulations) are available.
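The three tasks above can be sketched in a minimal toy example. This is a hedged illustration only, not the method of the talk: it assumes a one-dimensional parameter with a standard normal prior, a Gaussian likelihood for a single observation, and a random-walk Metropolis-Hastings sampler as the Monte Carlo step; all function names and parameters are illustrative choices.

```python
import math
import random

# Modelling: a standard normal prior on x and a Gaussian likelihood
# y ~ N(x, sigma^2) for a single observation (illustrative assumptions).
def log_prior(x):
    return -0.5 * x ** 2  # log N(0, 1), up to an additive constant

def log_likelihood(x, y, sigma=0.5):
    return -0.5 * ((y - x) / sigma) ** 2  # log N(x, sigma^2), up to a constant

def log_posterior(x, y):
    # Bayes' rule in log form: log posterior = log prior + log likelihood (+ const)
    return log_prior(x) + log_likelihood(x, y)

# Sampling: random-walk Metropolis-Hastings on the posterior.
def metropolis(y, n_samples=5000, step=0.5, seed=0):
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_samples):
        x_prop = x + rng.gauss(0.0, step)  # symmetric proposal
        # Accept with probability min(1, posterior ratio)
        if math.log(rng.random()) < log_posterior(x_prop, y) - log_posterior(x, y):
            x = x_prop
        samples.append(x)
    return samples

# Analysis: a summary statistic (the posterior mean).
if __name__ == "__main__":
    samples = metropolis(y=1.0)
    post_mean = sum(samples) / len(samples)
    print(f"posterior mean estimate: {post_mean:.2f}")
```

For this conjugate Gaussian setup the exact posterior mean is y / (1 + sigma^2), so the Monte Carlo estimate should land near 0.8 for y = 1; in high dimensions, or when the prior and likelihood are not available in closed form, this simple recipe breaks down, which is precisely the setting the talk addresses.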