Focus of the program
Inverse problems are ubiquitous in science and engineering. Traditionally, they are formulated using a (physics-based) forward model that simulates the relation between parameters and observations. Typically, these observations are incomplete and noisy. Moreover, the model is often over-parametrized and thus not all parameters can be inferred from the observations alone. Taking into account the nature of the measurement process and prior information on the parameters leads to a regularized data-fitting problem. Solving this variational problem gives an estimate of the parameters. A Bayesian interpretation of this formulation leads to a posterior distribution on the parameters, allowing one to define uncertainties of the estimates as well.
This classical approach to solving inverse problems has been hugely successful. One challenge in practical applications is to formulate appropriate prior assumptions. While handcrafted priors (e.g., Total Variation) often suffice, more accurate results can be expected with more accurate prior assumptions. Recently, a number of data-driven approaches have been proposed to extract more accurate prior models from examples. These methods have achieved some successes, and efforts to understand them and make them more computationally efficient are underway. Beyond using examples for building prior models, methods for estimating the posterior from examples have been proposed as well. While conceptually attractive, these methods are unaware of the underlying physics and need massive amounts of training data. So far, their use has been restricted to small-scale problems.
Some of the main challenges include:
- Model selection (how to parametrize the models and include knowledge of the underlying physics)
- Small data (a large database of examples may not be available)
- Analysis (guarantees on convergence, reconstruction, generalization, robustness)
- Uncertainty quantification (generate samples from the posterior)
- Computational efficiency (for training, inference and sampling)
- Explainability (can we explain features in the outcome and trace them back to the prior or to features in the data?)
- Real-world application (out-of-distribution samples, calibration, ...)
Progress along these lines can only be made by combining mathematical analysis, inverse problems, scientific computing and data science. In particular, the developments in scientific machine learning and physics-based machine learning are very relevant.
The program, hosted by CWI at Amsterdam Science Park, consists of the following events (subject to change):
- A mini-symposium on 19 May 2022, celebrating the fifth anniversary of the Flex-ray lab and showcasing collaborations and recent developments
- A masterclass on 12 & 13 May 2022 on Bayesian statistics and machine learning, aimed at PhD students who want to learn more about the underlying theory
- A data challenge / hackathon
- Networking events, with an invited speaker and drinks/dinner afterwards
- A mini-symposium with invited international speakers (a two-day event with lectures and plenty of time for informal discussions)