Seminar for machine learning and UQ in scientific computing

Seminar on machine learning and uncertainty quantification for scientific computing, organised by the scientific computing group at CWI.

The focus of this seminar is the application of Machine Learning (ML) and Uncertainty Quantification (UQ) in scientific computing. Topics of interest include, among others:

  • combination of data-driven models and (multi-scale) simulations,
  • new ML architectures suited for scientific computing or UQ,
  • incorporation of (physical) constraints into data-driven models,
  • efficient (online) learning strategies,
  • using ML for dimension reduction / creating surrogates,
  • inverse problems using ML surrogates,

and any other topic in which some form of ML and/or UQ is used to enhance (existing) scientific computing methodologies. All applications are welcome, be it financial, physical, biological or otherwise.

For more information, or if you'd like to attend one of the talks, please contact Wouter Edeling of the SC group.

Schedule of upcoming talks:

13 May 2021 16h00: John Harlim (Penn State): Machine learning of missing dynamical systems

In the talk, I will discuss a general closure framework to compensate for the model error arising from missing dynamical systems. The proposed framework reformulates the model error problem as a supervised learning task to estimate a very high-dimensional closure model, deduced from the Mori-Zwanzig representation of a projected dynamical system with a projection operator chosen based on Takens' embedding theory. Besides theoretical convergence, this connection provides a systematic framework for closure modeling using available machine learning algorithms. I will demonstrate numerical results using a kernel-based linear estimator as well as neural network-based nonlinear estimators. If time permits, I will also discuss error bounds and mathematical conditions that allow the estimated model to reproduce the underlying stationary statistics, such as one-point statistical moments and auto-correlation functions, in the context of learning Itô diffusions.
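As a rough illustration of the general idea (not the talk's actual construction), closure learning can be phrased as supervised regression on a delay embedding of the observed variable. Everything below — the toy two-variable system, the Gaussian kernel, and all hyperparameters — is invented for this sketch:

```python
import numpy as np

# Toy two-variable system; we only observe x and learn the influence of the
# unobserved variable y as a function of a delay embedding of x (Takens).
def full_rhs(state):
    x, y = state
    return np.array([-x + y, -2.0 * y + x**2])

# Generate a trajectory with forward Euler (purely illustrative).
dt, n = 0.01, 5000
traj = np.zeros((n, 2))
traj[0] = [1.0, 0.5]
for k in range(n - 1):
    traj[k + 1] = traj[k] + dt * full_rhs(traj[k])

x = traj[:, 0]
# "Model error": the true dx/dt minus the known resolved part (-x) equals y,
# the missing term the closure model should supply.
dxdt = np.gradient(x, dt)
target = dxdt + x

# Delay-embedding features z_k = (x_k, x_{k-1}, ..., x_{k-d+1}).
d = 5
Z = np.column_stack([x[d - 1 - j : n - j] for j in range(d)])
t = target[d - 1:]

# Kernel ridge regression with a Gaussian kernel as the closure estimator.
def gram(A, B, ell=0.5):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * ell**2))

idx = np.arange(0, len(Z), 25)                    # subsampled training points
K = gram(Z[idx], Z[idx])
alpha = np.linalg.solve(K + 1e-4 * np.eye(len(idx)), t[idx])
pred = gram(Z, Z[idx]) @ alpha                    # closure on the whole orbit

print("relative L2 error:", np.linalg.norm(pred - t) / np.linalg.norm(t))
```

The kernel estimator here stands in for the kernel-based linear estimator mentioned in the abstract; a neural network could be dropped in for the regression step in the same way.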

10 Jun. 2021: Hannah Christensen (Oxford)

17 June 2021 15h00 CET: Bruno Sudret (ETH Zürich, Chair of Risk, Safety and Uncertainty Quantification): Surrogate modelling approaches for stochastic simulators

Computational models, a.k.a. simulators, are used in all fields of engineering and applied sciences to help design and assess complex systems in silico. Advanced analyses such as optimization or uncertainty quantification, which require repeated runs with varying input parameters, cannot be carried out with brute-force methods such as Monte Carlo simulation due to computational costs. This has motivated the recent development of surrogate models such as polynomial chaos expansions and Gaussian processes, among others. For so-called stochastic simulators, used e.g. in epidemiology, mathematical finance or wind turbine design, an intrinsic source of stochasticity exists on top of well-identified system parameters. As a consequence, repeated runs of the simulator for a given vector of inputs (called replications) provide different results, in contrast to the case of deterministic simulators: for each single input, the response is a random variable to be characterized.
In this talk we present an overview of the literature devoted to building surrogate models of such simulators, which we call stochastic emulators. Then we focus on a recent approach based on generalized lambda distributions and polynomial chaos expansions. The approach can be used with or without replications, which brings efficiency and versatility. As an outlook, practical applications to sensitivity analysis will also be presented.
Acknowledgments: This work is carried out together with Xujia Zhu, a PhD student supported by the Swiss National Science Foundation under Grant Number #175524 “SurrogAte Modelling for stOchastic Simulators (SAMOS)”.
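The replication setting can be made concrete with a minimal sketch: for each input, repeated runs yield a sample of the random response, and cheap surrogates are fitted to its summary statistics. The generalized-lambda/PCE construction in the talk is far richer; the simulator, design, and polynomial surrogates below are all assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stochastic simulator: the output is random even for fixed x.
def simulator(x, rng):
    return np.sin(np.pi * x) + (0.2 + 0.3 * x**2) * rng.standard_normal()

# Experimental design: 21 input points, R replications at each.
X = np.linspace(-1.0, 1.0, 21)
R = 200
runs = np.array([[simulator(x, rng) for _ in range(R)] for x in X])

# A crude "stochastic emulator": polynomial surrogates for the replication
# mean and standard deviation as functions of the input.
mean_fit = np.polynomial.Polynomial.fit(X, runs.mean(axis=1), 5)
std_fit = np.polynomial.Polynomial.fit(X, runs.std(axis=1), 5)

# For a new input, the emulator returns a full response distribution
# (here summarized by its first two moments).
x_new = 0.3
print(f"emulated response at x={x_new}: "
      f"mean {mean_fit(x_new):.3f}, std {std_fit(x_new):.3f}")
```

Replacing the mean/std pair by a flexible parametric family fitted pointwise — as the generalized lambda distribution in the talk — recovers a full distribution rather than two moments.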

24 Jun. 2021: Federica Gugole (CWI)

Previous talks

29 Apr. 2021 16h15: Nathaniel Trask (Sandia): Structure preserving deep learning architectures for convergent and stable data-driven modeling

The unique approximation properties of deep architectures have attracted attention in recent years as a foundation for data-driven modeling in scientific machine learning (SciML) applications. The "black-box" nature of DNNs, however, requires large amounts of training data; in traditional engineering settings, where available data is relatively scarce, the resulting models generalize poorly, and it is generally difficult to provide a priori guarantees about their accuracy and stability. We adopt the perspective that tools from mimetic discretization of PDEs may be adapted to SciML settings, developing architectures and fast optimizers tailored to the specific needs of SciML. In particular, we focus on: realizing convergence competitive with FEM, preserving topological structure fundamental to conservation and multiphysics, and providing stability guarantees. In this talk we introduce some motivating applications at Sandia spanning shock magnetohydrodynamics and semiconductor physics before providing an overview of the mathematics underpinning these efforts.

25 Mar. 2021 15h00: Jurriaan Buist (CWI): Energy conservation for the one-dimensional two-fluid model for two-phase pipe flow

The one-dimensional two-fluid model (TFM) is a simplified model for multiphase flow in pipes. It is derived from a spatial averaging process, which introduces a closure problem concerning the wall and interface friction terms, similar to the closure problem in turbulence. To tackle this closure problem, we have approximated the friction terms by neural networks trained on data from direct numerical simulations (DNS).

Besides the closure problem, the two-fluid model has a long-standing stability issue: it is only conditionally well-posed. In order to tackle this issue, we analyze the underlying structure of the TFM in terms of its energy behavior. We show the new result that energy is an inherent 'secondary' conserved property of the mass and momentum conservation equations of the model. Furthermore, we develop a new spatial discretization that exactly conserves this energy in simulations.

The importance of structure preservation, and the core of our analysis, is not limited to the TFM. Neural networks that approximate physical systems can also be designed to preserve the underlying structure of a PDE. In this way, physics-informed machine learning can yield more physical results.

11 Mar. 2021 15h30: Benjamin Sanderse (CWI): Multi-Level Neural Networks for PDEs with Uncertain Parameters

A novel multi-level method for partial differential equations with uncertain parameters is proposed. The principle behind the method is that the error between grid levels in multi-level methods has a spatial structure that is, to good approximation, independent of the actual grid level. Our method learns this structure by employing a sequence of convolutional neural networks, which are well-suited to automatically detect local error features as latent quantities of the solution. Furthermore, by using the concept of transfer learning, the information of coarse grid levels is reused on fine grid levels in order to minimize the required number of samples on fine levels. The method outperforms state-of-the-art multi-level methods, especially when complex PDEs (such as single-phase and free-surface flow problems) are concerned, or when high accuracy is required.
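The underlying multi-level trick can be sketched in a few lines: learn the coarse-to-fine discrepancy from a handful of expensive fine solves, then correct many cheap coarse solves with it. The talk uses convolutional networks on spatial error fields; here a 1-D polynomial stands in for the learned error model, and the "solvers" are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "PDE solves" for an uncertain parameter theta: a fine solver
# and a cheap coarse solver whose error varies smoothly with theta.
def fine(theta):
    return np.sin(theta) + 0.01 * theta**2

def coarse(theta):
    return np.sin(theta)          # misses the 0.01 * theta**2 contribution

# Learn the coarse-fine discrepancy from only 20 expensive fine solves.
theta_train = rng.uniform(-2.0, 2.0, 20)
gap = fine(theta_train) - coarse(theta_train)
err_model = np.polynomial.Polynomial.fit(theta_train, gap, 3)

# Propagate uncertainty with many cheap coarse solves plus the learned
# correction, instead of many expensive fine solves.
theta_mc = rng.uniform(-2.0, 2.0, 10_000)
corrected = coarse(theta_mc) + err_model(theta_mc)
print("corrected MC estimate of the mean:", corrected.mean())
```

Because the discrepancy is smoother (and cheaper to learn) than the solution itself, few fine samples suffice — the same rationale the abstract gives for reusing coarse-level information on fine levels via transfer learning.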

25 Feb. 2021, 15h00: Maximilien de Zordo-Banliat (Safran Tech, Dynfluid Laboratory): Space-dependent Bayesian model averaging of turbulence models for compressor cascade flows.


Prediction of systems described by multiple alternative models is important for many applications in science and engineering, namely when it is not possible to identify a model that significantly outperforms every other model for each criterion. Two mathematical approaches tackling this issue are Bayesian Model Averaging (BMA) [1, 2], which builds an average of the concurrent models weighted by their marginal posterior probabilities, and Stacking [3, 4], where the unknown prediction is projected on a basis of alternative models, with weights to be learned from data. In both approaches, the weights are generally constant throughout the domain. More recently, Yu et al. [5] have proposed the Clustered Bayesian Averaging (CBA) algorithm, which leverages an ensemble of Regression Trees (RT) to infer weights as space-dependent functions. Similarly, we propose a Space-Dependent Stacking (SDS) algorithm which modifies the stacking formalism to include space-dependent weights, based on a spatial decomposition method.

In this work, the above-mentioned methods are investigated in a Computational Fluid Dynamics (CFD) context. Specifically, CFD of engineering systems often relies on Reynolds-Averaged Navier-Stokes (RANS) models to describe the effect of (unresolved) turbulent motions onto the mean (resolved) field. Since all turbulent motions are modelled, RANS turbulence models tend to be uncertain and case-dependent. Quantifying and reducing such uncertainties is then of the utmost importance for aerodynamics design in general, and specifically for the analysis and optimization of complex turbomachinery flows. In previous work [6], the present authors used Bayesian model averages of RANS models for providing improved predictions of a compressor cascade configuration, along with a quantification of confidence intervals associated with modelling uncertainties. Constant weights throughout the field were used. It is however expected, based on theoretical considerations and expert judgment, that different RANS models tend to perform better in some flow regions and worse in others, and consequently they should be assigned space-varying weights. For this reason, we implement and assess space-dependent averages of RANS models for compressor flow predictions. More precisely, the focus is put on two alternative algorithms: (i) a version of CBA adapted to flow variable fields, and (ii) a Space-Dependent Stacking (SDS) method based on a Karhunen-Loève decomposition of the mixture weights. Flow regions are described using selected features, formulated as functions of the mean flow quantities. Given a set of concurrent RANS models and a database of reference flow data corresponding to various operating conditions, the two algorithms are first trained against data, and subsequently applied to the prediction of an unobserved flow, i.e. another operating condition of the compressor cascade.
The algorithms assign a probability to each model in each region of the feature space, based on their ability to accurately predict selected Quantities of Interest (QoI) in this region. The space-dependent weighted average of the RANS models applied to the prediction scenario is used to reconstruct the expected solution and the associated confidence intervals. Preliminary results show that both methods generally yield more accurate solutions than the constant-weight BMA method, and provide a valuable estimate of the uncertainty intervals.
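A minimal sketch of the space-dependent weighting idea: two competing models, each accurate in a different region, are blended with weights derived from their local misfit to reference data. The actual CBA/SDS machinery (regression trees, Karhunen-Loève modes, feature-space regions) is replaced here by pointwise likelihood weights plus a moving-average smoother, and the models and data are made up:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)        # normalized position along the blade
truth = np.sin(2 * np.pi * x)         # stand-in for reference flow data

# Two hypothetical competing models, each accurate in a different region.
m1 = truth + 0.3 * np.clip(x - 0.5, 0.0, None)    # degrades for x > 0.5
m2 = truth - 0.3 * np.clip(0.5 - x, 0.0, None)    # degrades for x < 0.5
models = np.stack([m1, m2])

# Pointwise Gaussian-likelihood weights from the misfit to reference data,
# smoothed to keep the weight field spatially coherent.
sigma = 0.05
logw = -((models - truth) ** 2) / (2 * sigma**2)
w = np.exp(logw - logw.max(axis=0))
w /= w.sum(axis=0)
kernel = np.ones(15) / 15
w_smooth = np.stack([np.convolve(wi, kernel, mode="same") for wi in w])
w_smooth /= w_smooth.sum(axis=0)

# Space-dependent weighted average of the models.
blended = (w_smooth * models).sum(axis=0)
print("max |blend - truth|:", np.abs(blended - truth).max())
print("max |m1 - truth|   :", np.abs(m1 - truth).max())
```

The blend tracks whichever model is locally credible, which is exactly the advantage over constant-weight BMA that the abstract describes; the spread of the weighted mixture would additionally give local confidence intervals.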


[1] David Madigan, Adrian E Raftery, C Volinsky, and J Hoeting. Bayesian model averaging. In Proceedings of the AAAI Workshop on Integrating Multiple Learned Models, Portland, OR, pages 77–83, 1996.

[2] Jennifer A. Hoeting, David Madigan, Adrian E. Raftery, and Chris T. Volinsky. Correction to: “