Seminar for machine learning and UQ in scientific computing

Seminar on machine learning and uncertainty quantification for scientific computing, organised by the scientific computing group at CWI.

The focus of this seminar is the application of Machine Learning (ML) and Uncertainty Quantification (UQ) in scientific computing. Topics of interest include, among others:

  • combination of data-driven models and (multiscale) simulations,
  • new ML architectures suited for scientific computing or UQ,
  • incorporation of (physical) constraints into data-driven models,
  • efficient (online) learning strategies,
  • using ML for dimension reduction / creating surrogates,
  • inverse problems using ML surrogates,

and any other topic in which some form of ML and/or UQ is used to enhance (existing) scientific computing methodologies. All applications are welcome, be they financial, physical, biological or otherwise.

For more information, or if you'd like to attend one of the talks, please contact Wouter Edeling of the SC group.

Schedule of upcoming talks:

11 Mar. 2021, 15h30: Benjamin Sanderse (CWI): Continuous or Discrete? Does it matter?

 Partial differential equations (PDEs) are widely used to describe many physical phenomena, such as fluid flows. To solve these PDEs, a wide variety of methods have been designed; for example discretization techniques to approximate spatial and temporal derivatives; filtering techniques to remove small scales; reduced-order modeling techniques to remove solution components with little energy; and recently also machine learning techniques. In all of these techniques a fundamental question is at what point we switch from a continuous to a discrete representation. For example: should we filter the continuous equations, and then discretize; or first discretize, and then filter? In this presentation I will try to give a bird's eye view of my experience with this question, including several examples when 'discretizing first' is the preferred route. I will then also make the connection to continuous and discrete representations of neural networks.

25 Mar. 2021: Jurriaan Buist (CWI)

8 Apr. 2021: Nikolaj Mücke (CWI)

22 Apr. 2021: Luis Souto (CWI)

6 May 2021: John Harlim (Penn State)

10 Jun. 2021: Hannah Christensen (Oxford)

24 Jun. 2021: Federica Gugole (CWI)

Previous talks

25 Feb. 2021, 15h00: Maximilien de Zordo-Banliat (Safran Tech, Dynfluid Laboratory): Space-dependent Bayesian model averaging of turbulence models for compressor cascade flows.

Abstract

Prediction of systems described by multiple alternative models is of importance for many applications in science and engineering, namely when it is not possible to identify a model that significantly outperforms every other model for each criterion. Two mathematical approaches tackling this issue are Bayesian Model Averaging (BMA) [1, 2], which builds an average of the concurrent models weighted by their marginal posterior probabilities, and Stacking [3, 4], where the unknown prediction is projected on a basis of alternative models, with weights to be learned from data. In both approaches, the weights are generally constant throughout the domain. More recently, Yu et al. [5] have proposed the Clustered Bayesian Averaging (CBA) algorithm, which leverages an ensemble of Regression Trees (RT) to infer weights as space-dependent functions. Similarly, we propose a Space-Dependent Stacking (SDS) algorithm which modifies the stacking formalism to include space-dependent weights, based on a spatial decomposition method.

In this work, the above-mentioned methods are investigated in a Computational Fluid Dynamics (CFD) context. Specifically, CFD of engineering systems often relies on Reynolds-Averaged Navier-Stokes (RANS) models to describe the effect of (unresolved) turbulent motions onto the mean (resolved) field. Since all turbulent motions are modelled, RANS turbulence models tend to be uncertain and case-dependent. Quantifying and reducing such uncertainties is then of the utmost importance for aerodynamics design in general, and specifically for the analysis and optimization of complex turbomachinery flows. In previous work [6], the present authors used Bayesian model averages of RANS models to provide improved predictions of a compressor cascade configuration, along with a quantification of the confidence intervals associated with modelling uncertainties. Constant weights throughout the field were used. It is however expected, based on theoretical considerations and expert judgment, that different RANS models tend to perform better in some flow regions than in others, and consequently they should be assigned space-varying weights. For this reason, we implement and assess space-dependent averages of RANS models for compressor flow predictions. More precisely, the focus is on two alternative algorithms: (i) a version of CBA adapted to flow variable fields, and (ii) a Space-Dependent Stacking (SDS) method based on a Karhunen–Loève decomposition of the mixture weights. Flow regions are described using selected features, formulated as functions of the mean flow quantities. Given a set of concurrent RANS models and a database of reference flow data corresponding to various operating conditions, the two algorithms are first trained against data, and subsequently applied to the prediction of an unobserved flow, i.e. another operating condition of the compressor cascade. The algorithms assign a probability to each model in each region of the feature space, based on their ability to accurately predict selected Quantities of Interest (QoI) in this region. The space-dependent weighted average of the RANS models applied to the prediction scenario is used to reconstruct the expected solution and the associated confidence intervals. Preliminary results show that both methods generally yield more accurate solutions than the constant-weight BMA method, and provide a valuable estimate of the uncertainty intervals.
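
As a point of reference for the constant-weight baseline mentioned above, the short NumPy sketch below combines a few model predictions into a BMA mean and variance. The weights, means and variances are made-up placeholder numbers, not RANS results, and the space-dependent CBA/SDS algorithms of the talk are not reproduced.

    import numpy as np

    # Minimal sketch of constant-weight Bayesian Model Averaging (BMA):
    # the BMA prediction is a mixture of the individual model predictions,
    # weighted by their posterior model probabilities.
    # Numbers below are illustrative placeholders, not RANS results.
    w = np.array([0.5, 0.3, 0.2])          # posterior model weights, sum to 1
    mu = np.array([1.00, 1.10, 0.90])      # per-model predictive means of a QoI
    var = np.array([0.02, 0.05, 0.03])     # per-model predictive variances

    mean_bma = np.sum(w * mu)
    # law of total variance: within-model + between-model contributions
    var_bma = np.sum(w * var) + np.sum(w * (mu - mean_bma) ** 2)

    print(f"BMA mean = {mean_bma:.3f}, BMA variance = {var_bma:.4f}")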

 References

[1] David Madigan, Adrian E. Raftery, C. Volinsky, and J. Hoeting. Bayesian model averaging. In Proceedings of the AAAI Workshop on Integrating Multiple Learned Models, Portland, OR, pages 77–83, 1996.

[2] Jennifer A. Hoeting, David Madigan, Adrian E. Raftery, and Chris T. Volinsky. Correction to: “Bayesian model averaging: a tutorial” [Statist. Sci. 14 (1999), no. 4, 382–417]. Statist. Sci., 15(3):193–195, 2000.

[3] David H. Wolpert. Stacked generalization. Neural Networks, 5(2):241–259, 1992.

[4] Leo Breiman. Stacked regressions. Machine Learning, 24(1):49–64, 1996.

[5] Qingzhao Yu, Steven N. MacEachern, and Mario Peruggia. Clustered Bayesian model averaging. Bayesian Anal., 8(4):883–908, 2013.

[6] M. de Zordo-Banliat, X. Merle, G. Dergham, and P. Cinnella. Bayesian model-scenario averaged predictions of compressor cascade flows under uncertain turbulence models. Computers & Fluids, 201:104473, 2020.

 11 Feb. 2021, 13h00: Stephan Rasp (ClimateAI): Hybrid machine learning-physics climate modeling: challenges and potential solutions

Climate models still exhibit large structural errors caused by uncertainties in the representation of sub-grid processes. The recent availability of short-term high-resolution simulations has motivated the development of machine learning-based sub-grid schemes. Such hybrid models have been shown to have the potential to improve the representation of processes such as convection and radiation, but fundamental problems remain, in particular instabilities and model drift. In this talk I will discuss some potential solutions to the key issues facing hybrid modeling.

21 Jan. 2021, 16h00: Kelbij Star (SCK CEN/ UGent): Reduced order modeling for incompressible flows with heat transfer and parametric boundary conditions

Complex fluid dynamics problems are usually solved numerically using discretization methods. However, these methods are often unfeasible for applications that require near real-time modeling or the testing of a large number of different system configurations, for instance for control purposes, sensitivity analyses or uncertainty quantification studies. This has stimulated the development of modeling techniques that reduce the number of degrees of freedom of the high-fidelity fluid flow and heat transfer models. Mathematical techniques are used to extract “features” of the high-fidelity model in order to replace it by a more simplified model. In that way, the required computational time and computer memory usage are reduced. In this presentation, I will present and discuss some reduced order modeling methods for incompressible flows and heat transfer problems, with a focus on the challenge of imposing parametric boundary conditions at the reduced order level.

14 Jan. 2021, 16h00: Jorino van Rhijn (CWI): Mind the Map: Generating Strong Approximations of Financial SDEs with GANs

Generative adversarial networks (GANs) have shown promising results when applied to partial differential equations and financial time series generation. We investigate whether GANs can be used to provide a strong approximation to the solution of one-dimensional stochastic differential equations (SDEs) using large time steps. Standard GANs are only able to approximate processes in conditional distribution, yielding a weak approximation to the SDE. A GAN architecture is proposed that enables strong approximation, called the constrained GAN. The discriminator of this GAN is informed with the map between the prior input to the generator and the corresponding output samples. The GAN is trained on a dataset obtained with exact simulation. The discriminator enforces path-wise equality between the exactly simulated training samples and the generator output. The architecture was tested on geometric Brownian motion (GBM) and the Cox–Ingersoll–Ross (CIR) process, where it was conditioned on a range of time steps and previous values of the asset process. The constrained GAN outperformed discrete-time schemes in strong error on a discretisation with large time steps. It also outperformed the standard conditional GAN when approximating the conditional distribution. We discuss methods to extend the constrained GAN to general one-dimensional Itô SDEs, for which exact simulation may not be available. In future work, the constrained GAN could be conditioned on the SDE parameter set as well, allowing it to learn an entire family of solutions simultaneously.
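
The constrained GAN itself is not reproduced here. As a small point of reference for the strong-error comparison mentioned above, the NumPy sketch below measures the pathwise error of a single large Euler–Maruyama step against exactly simulated GBM driven by the same Brownian increment; all parameter values are arbitrary illustration choices.

    import numpy as np

    # Sketch: strong (pathwise) error of one large Euler-Maruyama step for
    # geometric Brownian motion, measured against the exact solution driven
    # by the same Brownian increment. Parameters are arbitrary.
    rng = np.random.default_rng(0)
    mu, sigma, x0 = 0.05, 0.2, 1.0
    dt = 1.0                               # deliberately large time step
    n_paths = 100_000

    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    x_exact = x0 * np.exp((mu - 0.5 * sigma**2) * dt + sigma * dW)  # exact GBM
    x_euler = x0 * (1.0 + mu * dt + sigma * dW)                     # Euler step

    strong_error = np.mean(np.abs(x_exact - x_euler))
    print(f"strong error E|X_exact - X_Euler| ~ {strong_error:.4f}")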

7 Jan. 2021, 16h00: Boris Boonstra (CWI): Valuation of electricity storage contracts using the COS method.

In this presentation, we will introduce a numerical pricing technique, the well-known COS method, to solve the dynamic programming formulation of various financial derivatives in the electricity market. In particular, the electricity storage contract is discussed and priced with the COS method. Storage of electricity has become increasingly important, due to the gradual replacement of fossil fuels by more variable and uncertain renewable energy sources. The electricity storage contract is a contract where electricity can be sold/bought at fixed moments in time by trading on the electricity market in order to make a profit while the energy level in the storage changes. For example, an application of the electricity storage contract is to investigate whether a parking lot with electric cars can be utilized to make a profit by trading on the electricity market, by using the batteries of the stationary cars. The main idea of the COS method is to approximate the conditional probability density function via the Fourier cosine expansion and make use of the relation between the coefficients of the Fourier cosine expansion and the characteristic function. The use of the characteristic function is convenient, because the density function is often unknown for relevant asset processes, in contrast to the characteristic function.
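
To make the last step concrete, the sketch below recovers a density from its characteristic function with a Fourier cosine expansion, following the COS idea, for a standard normal where the exact answer is known. The function name cos_density, the truncation interval [a, b] and the number of terms N are illustrative choices, not part of the pricing method presented in the talk.

    import numpy as np

    # Sketch of the COS idea: recover a density from its characteristic
    # function via a Fourier cosine expansion on a truncation interval [a, b].
    # Illustrated for a standard normal, where the exact density is known.
    def cos_density(phi, x, a, b, N=64):
        k = np.arange(N)
        u = k * np.pi / (b - a)
        # Fourier cosine coefficients obtained from the characteristic function
        A = 2.0 / (b - a) * np.real(phi(u) * np.exp(-1j * u * a))
        A[0] *= 0.5                        # first term is weighted by one half
        return np.cos(np.outer(x - a, u)) @ A

    phi_normal = lambda u: np.exp(-0.5 * u**2)   # char. function of N(0, 1)
    x = np.linspace(-4, 4, 9)
    approx = cos_density(phi_normal, x, a=-10.0, b=10.0)
    exact = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
    print(np.max(np.abs(approx - exact)))        # close to machine precision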

16 Dec. 2020, 14h30:  Stef Maree (CWI): DroneSurance: Mathematical modelling of pay-as-you-fly drone insurance

Commercial drone usage is rapidly growing, for example for the delivery of parcels, surveillance, and the inspection of dikes, crops, or windmills. DroneSurance is a tool for drone users to purchase insurance only when they need it, and for as long as they need it. By incorporating flight-specific information, fairer premiums can be determined compared to traditional approaches. DroneSurance is funded via an EIT Digital grant, and is developed together with partners Bright Cape (NL), Achmea (NL), Universidad Politécnica de Madrid (Spain) and EURAPCO (Switzerland). The role of CWI in this project is to build a simple but robust prediction model for drone damage that initially uses expert knowledge and can gradually incorporate data as policies are sold and data are gathered. To this end, we have adopted the flexible framework offered by Bayesian Generalised Linear Models (BGLM).
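
The BGLM itself is not reproduced here. As a much simpler illustration of the "start from an expert prior, update as data arrive" workflow, the sketch below uses a conjugate Gamma–Poisson model for a claim rate; all numbers are hypothetical.

    # Much simplified illustration of Bayesian updating (not the actual BGLM):
    # a Gamma prior on the claim rate per flight hour, encoding expert
    # knowledge, is updated in closed form as claim data come in.
    # All numbers are hypothetical.
    alpha_prior, beta_prior = 2.0, 100.0   # prior mean rate = alpha/beta = 0.02

    claims, flight_hours = 3, 250.0        # hypothetical observed data
    alpha_post = alpha_prior + claims
    beta_post = beta_prior + flight_hours

    print("posterior mean claim rate:", alpha_post / beta_post)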

 10 Dec. 2020, 15h00: Vinit Dighe (CWI): Efficient Bayesian calibration of aeroelastic wind turbine models using surrogate modelling.

A very common approach to study the aerodynamic performance of wind turbines is to employ low-fidelity aeroelastic models. However, in several situations, the accuracy of such models can be unsatisfactory when comparing model predictions with experiments. In the past, the calibration of the model parameters was done manually by trial and error, which was inefficient and prone to error. To address this issue, a rigorous approach to the calibration problem is proposed by recasting it in a probabilistic setting using a Bayesian framework. To make this framework computationally feasible, the procedure is accelerated through the use of a polynomial chaos expansion-based surrogate model. The calibration is finally validated by using the calibrated model parameters to predict the aerodynamic performance of wind turbines in two different experimental setups.
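
The sketch below is a generic illustration of why a surrogate makes the Bayesian calibration loop affordable, not the aeroelastic model or the polynomial chaos surrogate of the talk: a cheap polynomial fit to a handful of "expensive" runs replaces the full model inside the likelihood, and the posterior of a single calibration parameter is evaluated on a grid. The model, data and parameter names are synthetic.

    import numpy as np

    # Generic sketch of surrogate-accelerated Bayesian calibration with a
    # synthetic model and data (not the talk's aeroelastic setup).
    rng = np.random.default_rng(1)

    def expensive_model(theta):                  # stand-in for the costly solver
        return 2.0 - np.exp(-theta)

    theta_train = np.linspace(0.0, 3.0, 12)      # a few "expensive" runs
    surrogate = np.polynomial.Polynomial.fit(
        theta_train, expensive_model(theta_train), deg=7)

    theta_true, sigma_obs = 1.7, 0.05            # synthetic "experimental" data
    y_obs = expensive_model(theta_true) + rng.normal(0.0, sigma_obs, 20)

    theta_grid = np.linspace(0.0, 3.0, 601)      # uniform prior on [0, 3]
    log_lik = -0.5 * np.sum(
        (y_obs[None, :] - surrogate(theta_grid)[:, None]) ** 2,
        axis=1) / sigma_obs ** 2
    post = np.exp(log_lik - log_lik.max())
    dtheta = theta_grid[1] - theta_grid[0]
    post /= post.sum() * dtheta                  # normalise the grid posterior
    print("posterior mean:", np.sum(theta_grid * post) * dtheta)  # near theta_true

Every likelihood evaluation above costs only a polynomial evaluation; the expensive model is called just twelve times to build the surrogate.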

 Previous UQ seminar talks:

This seminar is a continuation of the UQ seminar. Presentations are available internally at https://oc.cwi.nl.

2020

12 Mar. 2020: Martin Janssens (TU Delft): Machine learning unresolved turbulence in a variational multiscale model

Models for the influence of unresolved processes on resolved scales remain a prominent error source in numerical simulations of engineering and atmospheric flows. In recent years, improvements in the capacity of machine learning algorithms and the increasing availability of high-fidelity datasets have identified data-driven unresolved-scales models in general, and Artificial Neural Networks (ANNs) in particular, as high-potential options to break the deadlock. Yet, early contributions in this field rely on inconsistent multiscale model formulations and are plagued by numerical instability. To sketch a clearer picture of the sources of the accuracy and instability of ANN unresolved-scales models, we have developed a framework in which no assumptions on the model form are made. We use ANNs to infer exact projections of the unresolved-scales processes on the resolved degrees of freedom. Such "interaction terms" naturally arise from Variational Multiscale Model (VMM) formulations. Our VMM-ANN framework limits the error to the data-driven interaction term approximations, offering explicit insight into their functioning. We assess our model in the context of a one-dimensional, forced Burgers’ problem, for a range of simple and realistic forcings. Simple, feedforward ANNs with local input, trained offline on error-free data before being inserted in forward simulations, strongly improve the prediction of the interaction terms of our problem compared to traditional, algebraic VMM closures in offline settings at various levels of discretisation; they also generalise well to uncorrelated instances of our forcing. However, this performance does not translate to simulations of forward problems. The model suffers from instability due to i) ill-posed nonlinear solution procedures and ii) self-inflicted error accumulation. These correspond to two dimensions of forward simulations that are not accounted for by offline training on error-free data. We show that introducing noise to the training data can partly remedy these problems in our simple setting. Yet, we conclude that appreciable challenges remain in order to capitalise on the promise offered by ANNs to improve the unresolved-scales modelling of turbulence.

3 Feb. 2020: Akil Narayan (Uni Utah, US): Low-rank algorithms for multi-level and multi-fidelity simulations in uncertainty quantification

High-fidelity computational software is often a trusted simulation-based tool for physics-based prediction of real-world phenomena. The tradeoff of such high-fidelity software in delivering an accurate prediction is substantial computational expense. In such situations, performing uncertainty quantification on these simulations, which often requires many queries of the software, is infeasible. Alternatively, low-fidelity simulations are often orders of magnitude less expensive, but yield inaccurate physical predictions. In more complicated cases, one is faced with multiple models at multiple fidelity levels, and must make a decision about where to allocate computational resources. In this talk we explore low-rank strategies for addressing such multi-fidelity problems. While low-fidelity models are of dubious predictive value, they can be of substantial value for quantifying uncertainty through low-rank methods. We explore the practical and mathematical aspects of these approaches, and demonstrate their effectiveness on problems in molecular dynamics, linear elasticity, and topology optimization.
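
The low-rank machinery of the talk is not reproduced here. The sketch below shows a simpler relative of many multi-fidelity estimators, a two-fidelity control-variate Monte Carlo estimator, with toy functions standing in for the high- and low-fidelity models.

    import numpy as np

    # Sketch of a two-fidelity control-variate Monte Carlo estimator: a few
    # "expensive" high-fidelity runs are combined with many cheap low-fidelity
    # runs. The models here are toy functions, not the talk's applications.
    rng = np.random.default_rng(0)

    f_hi = lambda x: np.sin(np.pi * x) + 0.05 * x**3   # "expensive" model
    f_lo = lambda x: np.sin(np.pi * x)                 # "cheap" surrogate

    n_hi, n_lo = 50, 50_000
    x_hi = rng.uniform(-1, 1, n_hi)                    # shared small sample
    x_lo = rng.uniform(-1, 1, n_lo)                    # large cheap sample

    y_hi, y_lo_small, y_lo_big = f_hi(x_hi), f_lo(x_hi), f_lo(x_lo)
    cov = np.cov(y_hi, y_lo_small)
    alpha = cov[0, 1] / cov[1, 1]                      # control-variate weight

    est = y_hi.mean() + alpha * (y_lo_big.mean() - y_lo_small.mean())
    print("control-variate estimate of E[f_hi]:", est)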

2019

5 Dec. 2019: Yous van Halder (CWI): Multi-level surrogate model construction with convolutional neural networks
In this talk, we will present a novel approach that uses convolutional neural networks to improve multi-level Monte Carlo estimators. Next to the classic idea of variance decay upon grid refinement, we employ the idea that the PDE error behavior is similar between consecutive levels when the grid is fine enough. Based on these ideas, we design the following neural network: a convolutional neural network that extracts local error features as latent quantities, and a fully connected network that maps local errors to global errors, subsequently extended with transfer learning to efficiently learn the error behavior on finer grids. We show promising results on nonlinear PDEs, including the incompressible Navier-Stokes equations.

21 Nov. 2019: Philippe Blondeel (KU Leuven): p-refined Multilevel Quasi-Monte Carlo for Galerkin Finite Element Methods with applications in Civil Engineering
Practical civil engineering problems are often characterized by uncertainty in their material parameters. Discretization of the underlying equations is typically done by means of the Galerkin Finite Element method. The uncertain material parameter can then be expressed as a random field that can be represented by, for example, a Karhunen–Loève expansion. Computation of the stochastic response is very costly, even if state-of-the-art Multilevel Monte Carlo (MLMC) is used. A significant cost reduction can be achieved by using p-refined Multilevel Quasi-Monte Carlo (p-MLQMC). The method is based on the idea of variance reduction by employing a hierarchical discretization of the problem based on a p-refinement scheme. This novel method is then combined with a rank-1 lattice rule yielding faster convergence compared to the method based on random Monte Carlo points. This method is first benchmarked on an academic beam bending problem. Finally, we use our algorithm for the assessment of the stability of slopes, a problem that arises in geotechnical engineering.
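
For readers unfamiliar with the baseline, the sketch below shows the plain Multilevel Monte Carlo telescoping estimator (not the p-refined quasi-Monte Carlo variant of the talk), with a hypothetical level-dependent "solver" whose discretisation error decays with the level.

    import numpy as np

    # Sketch of a plain Multilevel Monte Carlo estimator. A toy "solver"
    # P(level, omega) becomes more accurate as the level increases; MLMC sums
    # the coarse estimate and the corrections between consecutive levels.
    rng = np.random.default_rng(0)

    def P(level, omega):
        # hypothetical level-dependent approximation of a QoI; the
        # discretisation error decays geometrically with the level
        return np.exp(omega) * (1.0 + 2.0 ** (-(level + 1)))

    samples_per_level = [4000, 1000, 250]      # fewer samples on finer levels
    estimate = 0.0
    for level, n in enumerate(samples_per_level):
        omega = rng.normal(0.0, 1.0, n)        # shared inputs within each correction
        if level == 0:
            estimate += P(0, omega).mean()
        else:
            estimate += (P(level, omega) - P(level - 1, omega)).mean()

    print("MLMC estimate of E[P_L]:", estimate)
    print("exact limit E[exp(omega)]:", np.exp(0.5), "(gap = bias of finest level)")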

7 Nov. 2019: Georgios Pilikos (CWI): Bayesian Machine Learning for Seismic Compressive Sensing
We will introduce Bayesian machine learning for efficient seismic surveys. This includes data-driven models that learn sparse representations (features) of the data and are built around the framework of Compressive Sensing. Furthermore, we will show that, using these models, it is possible to predict the values at missing receivers and simultaneously quantify the uncertainty of these predictions.
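
The sketch below is a non-Bayesian, fixed-dictionary illustration of the compressive-sensing reconstruction step only: a signal that is sparse in a cosine (DCT-like) dictionary is recovered from a random subset of its samples ("missing receivers") with orthogonal matching pursuit. The learned representations and the uncertainty quantification discussed in the talk are not reproduced.

    import numpy as np

    # Fixed-dictionary compressive-sensing sketch with orthogonal matching
    # pursuit; the dictionary, signal and sample sizes are illustrative.
    rng = np.random.default_rng(0)
    N = 256
    n = np.arange(N)
    Psi = np.cos(np.pi * np.outer(n + 0.5, np.arange(N)) / N)  # DCT-II atoms
    Psi /= np.linalg.norm(Psi, axis=0)                         # unit-norm columns

    coeff_true = np.zeros(N)
    coeff_true[[3, 17, 40]] = [1.0, -0.7, 0.4]                 # sparse coefficients
    signal = Psi @ coeff_true

    keep = np.sort(rng.choice(N, size=96, replace=False))      # observed samples
    A, y = Psi[keep, :], signal[keep]

    # orthogonal matching pursuit: greedily pick atoms, refit by least squares
    support, residual = [], y.copy()
    for _ in range(3):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef

    recovered = np.zeros(N)
    recovered[support] = coef
    print("max reconstruction error:", np.max(np.abs(Psi @ recovered - signal)))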

24 Oct. 2019: Anna Nikishova (UvA): Sensitivity analysis based dimension reduction of multiscale models
Sensitivity analysis (SA) quantifies the effects of uncertainty in the model inputs on the model output. Whenever the variance is a representative measure of model uncertainty, the Sobol variance-based method is the preferred approach to identify the main sources of uncertainty. Additionally, SA can be applied to identify ineffectual input parameters in order to decrease the model dimensionality by fixing such parameters to their mean values.
In this talk, we demonstrate that in some cases SA of a single scale model provides information on the sensitivity of the final multiscale model output. This then can be employed to reduce the dimensionality of the multiscale model input. However, the sensitivity of a single scale model response does not always bound the sensitivity of the multiscale model output. Hence, an analysis of the function defining the relation between single scale components is required to understand whether single scale SA can be used to reduce the dimensionality of the overall multiscale model input space.
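
As a concrete baseline for the variance-based analysis described above, the sketch below estimates first-order Sobol indices for a toy three-input function (not the multiscale models of the talk) with a standard pick-and-freeze, Saltelli-type Monte Carlo estimator; the third input is nearly ineffectual and could be fixed to its mean.

    import numpy as np

    # Minimal sketch of first-order Sobol indices for a toy function,
    # using a pick-and-freeze (Saltelli-type) Monte Carlo estimator.
    rng = np.random.default_rng(0)

    def model(x):                          # toy model: x3 is nearly ineffectual
        return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.01 * x[:, 2]

    d, n = 3, 100_000
    A = rng.uniform(0, 1, (n, d))
    B = rng.uniform(0, 1, (n, d))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))

    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                # replace the i-th column of A by B's
        yABi = model(ABi)
        S_i = np.mean(yB * (yABi - yA)) / var_y
        print(f"first-order Sobol index S_{i + 1} ~ {S_i:.3f}")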

26 Sept. 2019: Nikolaj Mucke (CWI): Non-Intrusive Reduced Order Modeling of Nonlinear PDE-Constrained Optimization using Artificial Neural Networks
Nonlinear PDE-constrained optimization problems are computationally very time consuming to solve numerically. Hence, there is much to be gained from replacing the full order model with a reduced order surrogate model. Conventional methods, such as proper orthogonal decomposition (POD), often yield such a reduced order model, but do not necessarily cut down the computation time for nonlinear problems due to the intrusive nature of the method. As an alternative, artificial neural networks, combined with POD, are presented here as a viable non-intrusive surrogate model that cuts down the computation time significantly.

The talk will be divided into three parts: 1) a brief introduction to PDE-constrained optimization and a discussion about why such problems are computationally heavy to solve, 2) an introduction to POD in the context of PDE-constrained optimization, 3) a presentation of how neural networks can be utilized as a surrogate model.
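
The sketch below illustrates the generic non-intrusive pipeline only, on a toy 1D parametrized "solution" rather than the PDE-constrained problems of the talk: a POD basis is built from snapshots via the SVD, and a small neural network (scikit-learn's MLPRegressor, used here purely for illustration) learns the map from the parameter to the POD coefficients.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Generic sketch of a non-intrusive POD + neural-network surrogate on a
    # toy 1D parametrized field. The example names and parameter ranges are
    # illustrative, not taken from the talk.
    x = np.linspace(0, 1, 200)
    mu_train = np.linspace(0.5, 2.0, 40)

    # snapshot matrix: one column per parameter value
    snapshots = np.stack(
        [np.sin(np.pi * m * x) * np.exp(-m * x) for m in mu_train], axis=1)

    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    r = 5                                  # retain the first r POD modes
    basis = U[:, :r]
    coeffs = basis.T @ snapshots           # POD coefficients, shape (r, n_train)

    nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
    nn.fit(mu_train.reshape(-1, 1), coeffs.T)   # parameter -> POD coefficients

    mu_test = np.array([[1.3]])
    u_rom = basis @ nn.predict(mu_test).ravel()          # fast surrogate
    u_ref = np.sin(np.pi * 1.3 * x) * np.exp(-1.3 * x)   # reference solution
    print("max surrogate error:", np.max(np.abs(u_rom - u_ref)))

Once trained, evaluating the surrogate requires only one network forward pass and one small matrix-vector product, which is what makes it attractive inside an optimization loop.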

5 Sept. 2019: Kelbij Star (SCK•CEN / Ghent University), POD-Galerkin Reduced Order Model of the Boussinesq approximation for buoyancy-driven flows
A parametric Reduced Order Model (ROM) for buoyancy-driven flows is presented, for which the Full Order Model (FOM) is based on a finite volume approximation. To model the buoyancy, a Boussinesq approximation is applied, so that there is a two-way coupling between the incompressible Boussinesq equations and the energy equation. The ROM is obtained by performing a Galerkin projection of the governing equations onto a reduced basis space constructed using a Proper Orthogonal Decomposition approach. The ROM is tested on a 2D differentially heated cavity, of which the wall temperatures are parametrized using a control function method. Furthermore, the issues and challenges of Reduced Order Modeling, such as stability, non-linearity and boundary control, are discussed. Finally, attention will be paid to the training of the ROM, especially for applications in Uncertainty Quantification.

22 Aug. 2019: Hemaditya Malla (CWI - TU/e), Response-based quadrature rules for uncertainty quantification
Forward propagation problems in uncertainty quantification using polynomial-based surrogates involve the numerical approximation of integrals using quadrature rules. Such numerical approximations require function evaluations that are often costly. Most quadrature rules in the literature are constructed so as to exactly integrate polynomials up to a given degree. The accuracy of such quadrature rules depends on the smoothness of the function being integrated. In order to integrate functions that lack smoothness, this approach requires a large number of function evaluations to reach a certain accuracy. In this talk, an algorithm is introduced that generates quadrature rules of higher accuracy for such functions, while requiring fewer function evaluations.
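
For context on the classical baseline the talk improves upon, the sketch below propagates a Gaussian input through a smooth quantity of interest with a standard Gauss–Hermite rule (after the change of variables x = sqrt(2) t) and compares it to plain Monte Carlo; the response-based rules of the talk are not reproduced.

    import numpy as np

    # Minimal sketch of forward propagation with a classical quadrature rule:
    # E[f(X)] for X ~ N(0, 1) via Gauss-Hermite quadrature versus Monte Carlo.
    f = lambda x: np.exp(0.3 * x)          # smooth quantity of interest

    nodes, weights = np.polynomial.hermite.hermgauss(10)
    quad = np.sum(weights * f(np.sqrt(2.0) * nodes)) / np.sqrt(np.pi)

    rng = np.random.default_rng(0)
    mc = f(rng.normal(0.0, 1.0, 100_000)).mean()

    exact = np.exp(0.3**2 / 2.0)           # E[exp(0.3 X)] = exp(0.09 / 2)
    print(f"quadrature: {quad:.6f}, Monte Carlo: {mc:.6f}, exact: {exact:.6f}")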

25 Apr. 2019: Laurent van den Bos (CWI)