Seminar for machine learning and UQ in scientific computing

Seminar on machine learning and uncertainty quantification for scientific computing, organised by the scientific computing group at CWI.

This seminar is organized by the Scientific Computing group of CWI Amsterdam. The focus is on the application of Machine Learning (ML) and Uncertainty Quantification (UQ) in scientific computing. Topics of interest include, among others:

  • combination of data-driven models and (multi-scale) simulations,
  • new ML architectures suited for scientific computing or UQ,
  • incorporation of (physical) constraints into data-driven models,
  • efficient (online) learning strategies,
  • using ML for dimension reduction / creating surrogates,
  • inverse problems using ML surrogates,

and any other topic in which some form of ML and/or UQ is used to enhance (existing) scientific computing methodologies. All applications are welcome, be they financial, physical, biological or otherwise.

For more information, or if you'd like to attend one of the talks, please contact Wouter Edeling of the SC group.

Schedule of upcoming talks:

 

22 September 2022 12h00 CET: Margaux Boxho (Cenaero): Validation of data-driven wall models on the upper and lower walls of the two-dimensional periodic hill

Large Eddy Simulations (LES) are of increasing interest for turbomachinery design since they provide a more reliable prediction of flow physics and component behaviour than standard RANS simulations. However, they remain prohibitively expensive at high Reynolds numbers or for realistic geometries. Most of the cost is associated with the resolution of the boundary layer, and therefore, to save computational resources, wall-modeled LES (wmLES) has become a valuable tool. Most existing analytical wall models assume the boundary layer to be fully turbulent, attached, flow-aligned, and near-equilibrium. However, these assumptions no longer hold for the complex flow patterns that frequently occur in turbomachinery passages (i.e., misalignment, separation, …). Although relevant progress has been made in recent years, wall models have not always brought a clear benefit for such realistic flows. This work is a first step in addressing more complex flow features, such as separation, in the development of wall models: it proposes an innovative data-driven wall model for the treatment of separated flows.

Among the many possibilities to solve the complex regression problem (i.e., predicting the wall shear stress from instantaneous flow data and the geometry), deep neural networks have been selected for their universal approximation capabilities (K. Hornik, “Approximation capabilities of multilayer feedforward networks,” Neural Networks, 4(2):251–257, 1991). In the present framework, the two-dimensional periodic hill problem is selected as a reference test case featuring the separation of a fully turbulent boundary layer. Gaussian Mixture Neural Networks (GMN) and Convolutional Neural Networks (CNN) combined with a self-attention layer (Vaswani et al., “Attention Is All You Need,” 31st Conference on Neural Information Processing Systems, NIPS 2017, Long Beach, CA, USA) are trained to predict the two components of the wall shear stress using instantaneous flow quantities and geometric parameters. The input stencil is obtained by analysing space-time correlations (M. Boxho et al., “Analysis of space-time correlations for the two-dimensional periodic hill problem to support the development of wall models,” ETMM13, 2021). Once the model has been trained on different databases (flow on the two walls of the periodic hill, flow in a channel at Re_τ = 950), it is implemented in a high-order Discontinuous Galerkin (DG) flow solver. The a priori and a posteriori validation of the data-driven wall models on the periodic hill problem will be presented. A discussion of the generalizability of the data-driven wall model will also be reported.
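The Gaussian-mixture idea behind the GMN model can be illustrated with a short mixture density network sketch (a hedged illustration only, not Cenaero's implementation; the input size, hidden widths, number of mixture components and variable names are assumptions):

    import torch
    import torch.nn as nn

    class GaussianMixtureNet(nn.Module):
        """Predicts a Gaussian-mixture density over the two wall-shear-stress components."""
        def __init__(self, n_inputs=8, n_components=3, n_outputs=2, hidden=64):
            super().__init__()
            self.n_components, self.n_outputs = n_components, n_outputs
            self.backbone = nn.Sequential(nn.Linear(n_inputs, hidden), nn.ReLU(),
                                          nn.Linear(hidden, hidden), nn.ReLU())
            self.logits = nn.Linear(hidden, n_components)                   # mixture weights
            self.means = nn.Linear(hidden, n_components * n_outputs)        # component means
            self.log_sigmas = nn.Linear(hidden, n_components * n_outputs)   # component spreads

        def forward(self, x):
            h = self.backbone(x)
            mix = torch.distributions.Categorical(logits=self.logits(h))
            mu = self.means(h).view(-1, self.n_components, self.n_outputs)
            sigma = self.log_sigmas(h).exp().view(-1, self.n_components, self.n_outputs)
            comp = torch.distributions.Independent(torch.distributions.Normal(mu, sigma), 1)
            return torch.distributions.MixtureSameFamily(mix, comp)

    # Training would minimize the negative log-likelihood of observed wall shear stresses:
    #   loss = -model(inputs).log_prob(tau_wall).mean()

Predicting a full mixture density rather than a single value is what allows such a wall model to express the uncertainty of the wall shear stress in separated regions.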

 

13 October 2022 16h00 CET: Joseph Bakarji (University of Washington): Discovering dimensionless groups from data using constrained sparse identification and deep learning methods

Dimensional analysis is a robust technique for extracting insights and finding symmetries in physical systems, especially when the governing equations are not known. The Buckingham Pi theorem provides a procedure for finding a set of dimensionless groups from given parameters and variables. However, this set is often non-unique, which makes dimensional analysis an art that requires experience with the problem at hand. In this talk, I'll propose a data-driven approach that takes advantage of the symmetric and self-similar structure of available measurement data to discover the dimensionless groups that best collapse the data to a lower-dimensional space according to an optimal fit. We develop three machine learning methods that use the Buckingham Pi theorem as a constraint: (i) a constrained optimization problem with a nonparametric function, (ii) a deep learning algorithm (BuckiNet) that projects the input parameter space to a lower dimension in the first layer, and (iii) a sparse identification of differential equations method to discover differential equations with dimensionless coefficients that parameterize the dynamics. I discuss the accuracy and robustness of these methods when applied to known nonlinear systems where the dimensionless groups are known, and propose a few avenues for future research.
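For readers less familiar with the constraint used in these three methods, the Buckingham Pi theorem can be stated compactly (standard textbook form, not specific to the speaker's algorithms): given quantities q_1, ..., q_n with dimension matrix D, whose columns contain the base-unit exponents of each quantity, every dimensionless group is a power-law monomial whose exponent vector lies in the null space of D,

    \[
    \pi_j \;=\; \prod_{i=1}^{n} q_i^{\,a_{ij}}, \qquad D\, a_j = 0, \qquad j = 1, \dots, n - \operatorname{rank}(D).
    \]

The data-driven methods above can thus be read as searching this null space for the exponent vectors that best collapse the measurement data.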
 

Previous talks

1 September 2022 14h00 CET: Stefania Fresca (Politecnico di Milano): Deep learning-based reduced order models for the real-time approximation of parametrized PDEs

Conventional reduced order models (ROMs) anchored to the assumption of modal linear superimposition, such as proper orthogonal decomposition (POD), may prove inefficient when dealing with nonlinear time-dependent parametrized PDEs, especially for problems featuring coherent structures propagating over time. To enhance ROM efficiency, we propose a nonlinear approach to building ROMs by exploiting deep learning (DL) algorithms, such as convolutional neural networks. In the resulting DL-ROM, both the nonlinear trial manifold and the nonlinear reduced dynamics are learned in a non-intrusive way by relying on DL algorithms trained on a set of full order model (FOM) snapshots, obtained for different parameter values. Performing a prior dimensionality reduction of the FOM snapshots through POD then makes it possible, when dealing with large-scale FOMs, to substantially speed up training times and decrease network complexity. The accuracy and efficiency of the DL-ROM technique are assessed on different parametrized PDE problems in cardiac electrophysiology, computational mechanics and fluid dynamics, possibly accounting for fluid-structure interaction (FSI) effects, where new queries to the DL-ROM can be computed in real time. Moreover, numerical results obtained by applying DL-ROMs to an industrial application, i.e. the approximation of the structural or the electromechanical behaviour of Micro-Electro-Mechanical Systems (MEMS), will be shown.
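As a minimal sketch of the POD-plus-network idea (closer to a POD-NN surrogate than to the full convolutional DL-ROM of the talk; the toy snapshot data, layer sizes and reduced dimension are assumptions), one can compress FOM snapshots with an SVD and train a small network to map parameter and time to the reduced coefficients:

    import numpy as np
    import torch
    import torch.nn as nn

    # Toy FOM snapshots: a travelling wave sampled for several parameters mu and times t
    n_dof, n_mu, n_t = 500, 10, 20
    mu = np.linspace(0.5, 1.5, n_mu)
    t = np.linspace(0.0, 1.0, n_t)
    MU, T = np.meshgrid(mu, t, indexing="ij")
    P = np.stack([MU.ravel(), T.ravel()], axis=1)                 # (n_snapshots, 2) inputs (mu, t)
    xgrid = np.linspace(0.0, 1.0, n_dof)[:, None]
    S = np.sin(2 * np.pi * (xgrid - MU.ravel() * T.ravel()))      # (n_dof, n_snapshots) snapshot matrix

    # Linear pre-reduction: keep the r leading POD modes
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    V = U[:, :20]                                                  # POD basis, r = 20
    coeffs = V.T @ S                                               # reduced coordinates of each snapshot

    # Nonlinear reduced map: (mu, t) -> POD coefficients
    model = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                          nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 20))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.tensor(P, dtype=torch.float32)
    y = torch.tensor(coeffs.T, dtype=torch.float32)
    for epoch in range(2000):
        opt.zero_grad()
        loss = ((model(x) - y) ** 2).mean()
        loss.backward()
        opt.step()

    # Real-time query: reconstruct the full field for a new parameter/time pair
    u_new = V @ model(torch.tensor([[0.8, 0.3]])).detach().numpy().T

The non-intrusive character comes from the fact that only snapshots are needed: neither the training nor the online query touches the FOM operators.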

15 June 2022 16h00 CET: Cristóbal Bertoglio (Bernoulli Institute - U Groningen): Inverse problems in blood flow modeling from MRI

Mathematical and computational modeling of the cardiovascular system is increasingly providing alternatives to traditional invasive clinical procedures and allowing for richer diagnostic metrics. In blood flows, the personalization of the models -- based on geometrically multi-scale fluid flows and fluid-solid interaction combined with medical images -- relies on formulating and solving appropriate inverse problems. In this talk, we will focus on the challenges and opportunities that appear when using data coming from magnetic resonance imaging to measure both the anatomy and the function of the vasculature.

20 May 2022 11h00 CET: Cecilia Pagliantini (Eindhoven University of Technology): Structure-preserving dynamical model order reduction of Hamiltonian systems

In this talk we will consider reduced basis methods (RBM) for the model order reduction of parametric Hamiltonian dynamical systems describing nondissipative phenomena. The development of RBM for Hamiltonian systems is challenged by two main factors: (i) failing to preserve the geometric structure encoding the physical properties of the dynamics, such as invariants of motion or symmetries, might lead to instabilities and unphysical behaviors of the resulting approximate solutions; (ii) the local low-rank nature of transport-dominated and nondissipative phenomena demands large reduced spaces to achieve sufficiently accurate approximations. We will discuss how to address these aspects via a structure-preserving nonlinear reduced basis approach based on dynamical low-rank approximation. The gist of the proposed method is to evolve low-dimensional surrogate models on a phase space that adapts in time while being endowed with the geometric structure of the full model. If time permits, we will also discuss a rank-adaptive extension of the proposed method where the dimension of the reduced space can change during the time evolution.

28 Apr. 2022 14h00 CET: Nazanin Abedini (Vrije Universiteit Amsterdam): Convergence properties of a data-assimilation method based on a Gauss-Newton iteration

Data assimilation is broadly used in many practical situations, such as weather forecasting, oceanography and subsurface modelling. Studying these physical systems comes with several challenges. For example, their state cannot be directly and accurately observed, or the underlying time-dependent system is chaotic, which means that small changes in initial conditions can lead to large changes in prediction accuracy. The aim of data assimilation is to correct the error in the state estimate by incorporating information from measurements into the mathematical model. The most widely used data-assimilation methods are variational methods. They aim at finding an optimal initial condition of the dynamical model such that the distance to the observations is minimized (under the constraint that the estimate is a solution of the dynamical system). The problem is formulated as the minimization of a nonlinear least-squares problem with respect to the initial condition, and it is usually solved using a Gauss-Newton method. We propose a variational data-assimilation method that also minimizes a nonlinear least-squares problem, but with respect to the whole trajectory over a time window at once. The goal is to obtain a more accurate estimate. We prove convergence of the method in the case of noise-free observations and provide an error bound in the case of noisy observations. We confirm our theoretical results with numerical experiments using Lorenz models.
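Schematically (in notation of my own, not necessarily the exact formulation of the talk, which may enforce the model constraint differently than the penalty written here), the trajectory-based variational problem minimizes a nonlinear least-squares cost over the whole assimilation window,

    \[
    \min_{x_0, \dots, x_K} \; J(x) \;=\; \sum_{k=0}^{K} \big\| y_k - H(x_k) \big\|^2 \;+\; \lambda \sum_{k=0}^{K-1} \big\| x_{k+1} - F(x_k) \big\|^2,
    \]

where F is the model propagator, H the observation operator and y_k the observations; a Gauss-Newton iteration then linearizes the residual r(x) around the current trajectory estimate and updates

    \[
    x^{(m+1)} \;=\; x^{(m)} - \big( J_r^\top J_r \big)^{-1} J_r^\top\, r\big(x^{(m)}\big), \qquad J_r = \frac{\partial r}{\partial x}\Big|_{x^{(m)}}.
    \]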

24 Feb. 2022 15h00 CET: Laura Scarabosio (Radboud University): Deep neural network surrogates for transmission problems with geometric uncertainties

We consider the point evaluation of the solution to interface problems with geometric uncertainties, where the uncertainty in the obstacle is described by a high-dimensional parameter, as a prototypical example of non-smooth dependence of a quantity of interest on the parameter. We focus in particular on an elliptic interface problem and a Helmholtz transmission problem. The non-smooth parameter dependence poses a challenge when one is interested in building surrogates. In this talk, we propose to use deep neural networks for this purpose. We provide a theoretical justification for why we expect neural networks to provide good surrogates. Furthermore, we present numerical experiments showing their good performance in practice. We observe in particular that neural networks do not suffer from the curse of dimensionality, and we study the dependence of the error on the number of point evaluations (which coincides with the number of discontinuities in the parameter space), as well as on several modeling parameters, such as the contrast between the two materials and, for the Helmholtz transmission problem, the wavenumber.

 

22 Jul. 2021 15h00 CET: Ilias Bilionis (School of Mechanical Engineering, Purdue University): Situational awareness in extraterrestrial habitats: Open challenges, potential applications of physics-informed neural networks, and limitations

I will start with an overview of the research activities carried out by the Predictive Science Laboratory (PSL) at Purdue. In particular, I will use our work at the Resilient Extra-Terrestrial Habitats Institute (NASA) to motivate the need for physics-informed neural networks (PINNs) for high-dimensional uncertainty quantification (UQ), automated discovery of physical laws, and complex planning. The current state of these three problems ranges from manageable to challenging to open, respectively. The rest of the talk will focus on PINNs for high-dimensional UQ and, in particular, on stochastic PDEs. I will argue that for such problems the squared integrated residual is not always the right choice. Using a stochastic elliptic PDE, I will derive a suitable variational loss function by extending the Dirichlet principle. This loss function exhibits (in the appropriate Hilbert space) a unique minimum that provably solves the desired stochastic PDE. Then, I will show how one can parameterize the solution using DNNs and construct a stochastic gradient descent algorithm that converges. Subsequently, I will present numerical evidence illustrating the benefits of this approach over the squared integrated residual, and I will highlight its capabilities and limitations, including some of the remaining open problems.
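For orientation, a textbook-style sketch of the extended Dirichlet principle mentioned above (my notation; the speaker's exact functional may differ): for the stochastic elliptic problem -∇·(a(x,ξ)∇u) = f with random coefficient a, the variational loss replaces the squared integrated residual by the energy functional

    \[
    \mathcal{J}(u) \;=\; \mathbb{E}_{\xi} \left[ \int_{D} \Big( \tfrac{1}{2}\, a(x,\xi)\, \big|\nabla u(x,\xi)\big|^{2} \;-\; f(x)\, u(x,\xi) \Big)\, dx \right],
    \]

whose unique minimizer over the appropriate Hilbert space is the weak solution of the stochastic PDE. Parameterizing u by a DNN and running stochastic gradient descent on an unbiased estimator of \mathcal{J} is the route the abstract describes.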

Video

23 Jun. 2021 10h00 CET: Christian Franzke (IBS Center for Climate Physics, Pusan National University, South Korea): Causality Detection and Multi-Scale Decomposition of the Climate System using Machine Learning

Detecting causal relationships and physically meaningful patterns in the complex climate system is an important but challenging problem. In my presentation I will show recent progress on both problems using Machine Learning approaches. First, I will show evidence that Reservoir Computing is able to systematically identify causal relationships between variables, including the causal direction, the coupling delay, and causal chain relations, from time series. Reservoir Computing Causality has three advantages: (i) robustness to noisy time series; (ii) computational efficiency; and (iii) seamless causal inference from high-dimensional data. Second, I will demonstrate that Multi-Resolution Dynamic Mode Decomposition can systematically identify physically meaningful patterns in high-dimensional climate data. In particular, Multi-Resolution Dynamic Mode Decomposition is able to extract the changing annual cycle.
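As background on the reservoir-computing building block (a generic echo state network sketch, not the causality-detection method of the talk; the reservoir size, spectral radius and toy input signal are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    n_res, n_in = 200, 1

    # Fixed random reservoir, rescaled to spectral radius < 1 (echo state property)
    W = rng.normal(size=(n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))

    def run_reservoir(u):
        """Drive the reservoir with an input series u (T x n_in); return the states (T x n_res)."""
        x = np.zeros(n_res)
        states = []
        for u_t in u:
            x = np.tanh(W @ x + W_in @ u_t)
            states.append(x.copy())
        return np.array(states)

    # One-step-ahead prediction: only the linear read-out is trained, by ridge regression
    u = np.sin(0.1 * np.arange(2000))[:, None]          # toy driving signal
    X, y = run_reservoir(u[:-1]), u[1:]
    W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
    pred = X @ W_out                                     # predicted next values

Causality-detection schemes built on such reservoirs typically compare how well a reservoir driven by one variable predicts another, in both directions.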

  

17 June 2021 15h00 CET: Bruno Sudret (ETH Zürich, Chair of Risk, Safety and Uncertainty Quantification): Surrogate modelling approaches for stochastic simulators

Computational models, a.k.a. simulators, are used in all fields of engineering and applied sciences to help design and assess complex systems in silico. Advanced analyses such as optimization or uncertainty quantification, which require repeated runs by varying input parameters, cannot be carried out with brute force methods such as Monte Carlo simulation due to computational costs. Thus the recent development of surrogate models such as polynomial chaos expansions and Gaussian processes, among others. For so-called stochastic simulators used e.g. in epidemiology, mathematical finance or wind turbine design, an intrinsic source of stochasticity exists on top of well-identified system parameters. As a consequence, for a given vector of inputs, repeated runs of the simulator (called replications) will provide different results, as opposed to the case of deterministic simulators. Consequently, for each single input, the response is a random variable to be characterized.
In this talk we present an overview of the literature devoted to building surrogate models of such simulators, which we call stochastic emulators. Then we focus on a recent approach based on generalized lambda distributions and polynomial chaos expansions. The approach can be used with or without replications, which brings efficiency and versatility. As an outlook, practical applications to sensitivity analysis will also be presented.
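As a compact statement of the approach just described (the notation is mine, a hedged sketch rather than the exact formulation of the talk), the stochastic simulator response at input x is modelled by a generalized lambda distribution whose quantile function is

    \[
    Q(u \mid x) \;=\; \lambda_1(x) \;+\; \frac{1}{\lambda_2(x)} \left( u^{\lambda_3(x)} - (1-u)^{\lambda_4(x)} \right), \qquad u \in (0,1),
    \]

and each of the four parameter fields is expanded onto a polynomial chaos basis, \lambda_s(x) \approx \sum_{\alpha} c_{s,\alpha} \Psi_\alpha(x), with the coefficients c_{s,\alpha} fitted from the (possibly non-replicated) simulator runs.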
Acknowledgments: This work is carried out together with Xujia Zhu, a PhD student supported by the Swiss National Science Foundation under Grant Number #175524 “SurrogAte Modelling for stOchastic Simulators (SAMOS)”.

 Video

10 Jun. 2021 16h00 CET: Hannah Christensen (Oxford): Machine Learning for Stochastic Parametrisation

Atmospheric models used for weather and climate prediction are traditionally formulated in a deterministic manner. In other words, given a particular state of the resolved scale variables, the most likely forcing from the sub-grid scale motion is estimated and used to predict the evolution of the large-scale flow. However, the lack of scale-separation in the atmosphere means that this approach is a large source of error in forecasts. Over the last decade an alternative paradigm has developed: the use of stochastic techniques to characterise uncertainty in small-scale processes. These techniques are now widely used across weather, seasonal forecasting, and climate timescales.

While there has been significant progress in emulating parametrisation schemes using machine learning, the focus has been entirely on deterministic parametrisations. In this presentation I will discuss data-driven approaches for stochastic parametrisation. I will describe experiments which develop a stochastic parametrisation using the generative adversarial network (GAN) machine learning framework for a simple atmospheric model. I will conclude by discussing the potential for this approach in complex weather and climate prediction models.

 Video

21 May 2021 16h00: John Harlim (Penn State): Machine learning of missing dynamical systems

In this talk, I will discuss a general closure framework to compensate for the model error arising from missing dynamical systems. The proposed framework reformulates the model error problem into a supervised learning task to estimate a very high-dimensional closure model, deduced from the Mori-Zwanzig representation of a projected dynamical system with a projection operator chosen based on Takens embedding theory. Besides theoretical convergence, this connection provides a systematic framework for closure modeling using available machine learning algorithms. I will demonstrate numerical results using a kernel-based linear estimator as well as neural-network-based nonlinear estimators. If time permits, I will also discuss error bounds and mathematical conditions that allow the estimated model to reproduce the underlying stationary statistics, such as one-point statistical moments and auto-correlation functions, in the context of learning Ito diffusions.
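Schematically (my notation, not necessarily that of the talk), if only a subset x of the full state is resolved, the missing dynamics are compensated by a learned memory term acting on a Takens delay embedding of the resolved variables,

    \[
    \dot{x}(t) \;=\; f\big(x(t)\big) \;+\; g_\theta\big(x(t),\, x(t-\tau),\, \dots,\, x(t-m\tau)\big),
    \]

where g_\theta is the closure model estimated by supervised learning (kernel-based or neural-network regression) from time series of x, and the Mori-Zwanzig formalism motivates the dependence on the delayed states.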

 Video

29 Apr. 2021 16h15: Nathaniel Trask (Sandia): Structure preserving deep learning architectures for convergent and stable data-driven modeling

The unique approximation properties of deep architectures have attracted attention in recent years as a foundation for data-driven modeling in scientific machine learning (SciML) applications. The "black-box" nature of DNNs, however, requires large amounts of training data, and the resulting models generalize poorly in traditional engineering settings where the available data is relatively small; it is also generally difficult to provide a priori guarantees about the accuracy and stability of extracted models. We adopt the perspective that tools from the mimetic discretization of PDEs may be adapted to SciML settings, developing architectures and fast optimizers tailored to the specific needs of SciML. In particular, we focus on: realizing convergence competitive with FEM, preserving topological structure fundamental to conservation and multiphysics, and providing stability guarantees. In this talk we introduce some motivating applications at Sandia, spanning shock magnetohydrodynamics and semiconductor physics, before providing an overview of the mathematics underpinning these efforts.

 

25 Mar. 2021 15h00: Jurriaan Buist (CWI): Energy conservation for the one-dimensional two-fluid model for two-phase pipe flow

The one-dimensional two-fluid model (TFM) is a simplified model for multiphase flow in pipes. It is derived from a spatial averaging process, which introduces a closure problem concerning the wall and interface friction terms, similar to the closure problem in turbulence. To tackle this closure problem, we have approximated the friction terms by neural networks trained on data from DNS simulations.

Besides the closure problem, the two-fluid model has a long-standing stability issue: it is only conditionally well-posed. In order to tackle this issue, we analyze the underlying structure of the TFM in terms of the energy behavior. We show the new result that energy is an inherent 'secondary' conserved property of the mass and momentum conservation equations of the model. Furthermore, we develop a new spatial discretization that exactly conserves this energy in simulations.

The importance of structure preservation, and the core of our analysis, is not limited to the TFM. Neural networks that approximate physical systems can also be designed to preserve the underlying structure of a PDE. In this way, physics-informed machine learning can yield more physical results.

 

11 Mar. 2021 15h30: Benjamin Sanderse (CWI): Multi-Level Neural Networks for PDEs with Uncertain Parameters

A novel multi-level method for partial differential equations with uncertain parameters is proposed. The principle behind the method is that the error between grid levels in multi-level methods has a spatial structure that is, to a good approximation, independent of the actual grid level. Our method learns this structure by employing a sequence of convolutional neural networks, which are well suited to automatically detect local error features as latent quantities of the solution. Furthermore, by using the concept of transfer learning, the information from coarse grid levels is reused on fine grid levels in order to minimize the required number of samples on the fine levels. The method outperforms state-of-the-art multi-level methods, especially when complex PDEs (such as single-phase and free-surface flow problems) are concerned, or when high accuracy is required.
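In multi-level notation (a schematic, not the exact estimator of the talk), the fine-level solution u_L is written as a coarse solution plus a telescoping sum of level corrections,

    \[
    u_L \;=\; u_0 \;+\; \sum_{\ell=1}^{L} \big( u_\ell - u_{\ell-1} \big),
    \]

and the method described above trains a convolutional network per level to predict the correction u_\ell - u_{\ell-1} from coarser information, so that only few expensive fine-level samples are needed; transfer learning initializes the network at level \ell from the one trained at level \ell-1.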

 

25 Feb. 2021, 15h00: Maximilien de Zordo-Banliat (Safran Tech, DynFluid Laboratory): Space-dependent Bayesian model averaging of turbulence models for compressor cascade flows

Predicting systems described by multiple alternative models is important for many applications in science and engineering, notably when it is not possible to identify a model that significantly outperforms every other model for each criterion. Two mathematical approaches tackling this issue are Bayesian Model Averaging (BMA) [1, 2], which builds an average of the concurrent models weighted by their marginal posterior probabilities, and Stacking [3, 4], where the unknown prediction is projected on a basis of alternative models, with weights to be learned from data. In both approaches, the weights are generally constant throughout the domain. More recently, Yu et al. [5] have proposed the Clustered Bayesian Averaging (CBA) algorithm, which leverages an ensemble of Regression Trees (RT) to infer weights as space-dependent functions. Similarly, we propose a Space-Dependent Stacking (SDS) algorithm which modifies the stacking formalism to include space-dependent weights, based on a spatial decomposition method.

In this work, the above-mentioned methods are investigated in a Computational Fluid Dynamics (CFD) context. Specifically, CFD of engineering systems often relies on Reynolds-Averaged Navier-Stokes (RANS) models to describe the effect of (unresolved) turbulent motions on the mean (resolved) field. Since all turbulent motions are modelled, RANS turbulence models tend to be uncertain and case-dependent. Quantifying and reducing such uncertainties is then of the utmost importance for aerodynamic design in general, and specifically for the analysis and optimization of complex turbomachinery flows. In previous work [6], the present authors used Bayesian model averages of RANS models to provide improved predictions of a compressor cascade configuration, along with a quantification of the confidence intervals associated with modelling uncertainties. Constant weights throughout the field were used. It is however expected, based on theoretical considerations and expert judgment, that different RANS models tend to perform better in some flow regions than in others, and consequently they should be assigned space-varying weights. For this reason, we implement and assess space-dependent averages of RANS models for compressor flow predictions. More precisely, the focus is put on two alternative algorithms: (i) a version of CBA adapted to flow variable fields, and (ii) a Space-Dependent Stacking (SDS) method based on a Karhunen-Loève decomposition of the mixture weights. Flow regions are described using selected features, formulated as functions of the mean flow quantities. Given a set of concurrent RANS models and a database of reference flow data corresponding to various operating conditions, the two algorithms are first trained against data, and subsequently applied to the prediction of an unobserved flow, i.e. another operating condition of the compressor cascade. The algorithms assign a probability to each model in each region of the feature space, based on their ability to accurately predict selected Quantities of Interest (QoI) in this region. The space-dependent weighted average of the RANS models applied to the prediction scenario is used to reconstruct the expected solution and the associated confidence intervals. Preliminary results show that both methods generally yield more accurate solutions than the constant-weight BMA method, and provide a valuable estimate of the uncertainty intervals.
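For reference (standard BMA formulas in my notation; the space-dependent weighting below is only a schematic of the CBA/SDS idea), the constant-weight Bayesian model average of a quantity of interest \Delta over models M_1, ..., M_K given data D reads

    \[
    p(\Delta \mid D) \;=\; \sum_{k=1}^{K} p(\Delta \mid M_k, D)\, P(M_k \mid D),
    \]

while the space-dependent variants replace the posterior model probabilities by weight fields w_k(x) \ge 0 with \sum_k w_k(x) = 1, learned from data as functions of the local flow features (regression trees in CBA, a Karhunen-Loève expansion of the weights in SDS), so that the averaged prediction becomes

    \[
    \bar{\Delta}(x) \;=\; \sum_{k=1}^{K} w_k(x)\, \Delta_k(x).
    \]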

References

[1] David Madigan, Adrian E Raftery, C Volinsky, and J Hoeting.