Seminar for machine learning and UQ in scientific computing

Seminar on machine learning and uncertainty quantification for scientific computing, organised by the Scientific Computing group at CWI.

This seminar is organised by the Scientific Computing group of CWI Amsterdam. The focus is on the application of Machine Learning (ML) and Uncertainty Quantification (UQ) in scientific computing. Topics of interest include, among others:

  • combination of data-driven models and (multi-scale) simulations,
  • new ML architectures suited for scientific computing or UQ,
  • incorporation of (physical) constraints into data-driven models,
  • efficient (online) learning strategies,
  • using ML for dimension reduction / creating surrogates,
  • inverse problems using ML surrogates,

and any other topic in which some form of ML and/or UQ is used to enhance (existing) scientific computing methodologies. All applications are welcome, be it financial, physical, biological or otherwise.

For more information, or if you'd like to attend one of the talks, please contact Wouter Edeling of the SC group.

Schedule of upcoming talks:

27 Jan. 2022 16h00 CET: Laura Scarabosio (Radboud University): TBD

Previous talks

22 Jul. 2021 15h00 CET: Ilias Bilionis (School of Mechanical Engineering, Purdue University): Situational awareness in extraterrestrial habitats: Open challenges, potential applications of physics-informed neural networks, and limitations

I will start with an overview of the research activities carried out by the Predictive Science Laboratory (PSL) at Purdue. In particular, I will use our work at the Resilient Extra-Terrestrial Habitats Institute (NASA) to motivate the need for physics-informed neural networks (PINNs) for high-dimensional uncertainty quantification (UQ), automated discovery of physical laws, and complex planning. The current state of these three problems ranges from manageable to challenging to open, respectively. The rest of the talk will focus on PINNs for high-dimensional UQ and, in particular, on stochastic PDEs. I will argue that for such problems, the squared integrated residual is not always the right choice. Using a stochastic elliptic PDE, I will derive a suitable variational loss function by extending the Dirichlet principle. This loss function exhibits (in the appropriate Hilbert space) a unique minimum that provably solves the desired stochastic PDE. Then, I will show how one can parameterize the solution using DNNs and construct a stochastic gradient descent algorithm that converges. Subsequently, I will present numerical evidence illustrating the benefits of this approach over the squared integrated residual, and I will highlight its capabilities and limitations, including some of the remaining open problems.
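As background, the deterministic Dirichlet principle that the talk extends to the stochastic setting (standard material; notation mine): for the elliptic problem -div(a grad u) = f on a domain D with u = 0 on the boundary, the energy functional

```latex
J(u) = \int_D \left( \tfrac{1}{2}\, a(x)\, |\nabla u(x)|^2 - f(x)\, u(x) \right) \mathrm{d}x
```

has a unique minimizer over H^1_0(D), and that minimizer is exactly the weak solution of the PDE, which makes J a natural training loss.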

Video

23 Jun. 2021 10h00 CET: Christian Franzke (IBS Center for Climate Physics, Pusan National University, South Korea): Causality Detection and Multi-Scale Decomposition of the Climate System using Machine Learning

Detecting causal relationships and physically meaningful patterns in the complex climate system is an important but challenging problem. In my presentation I will show recent progress on both problems using Machine Learning approaches. First, I will show evidence that Reservoir Computing is able to systematically identify the causal direction, coupling delay, and causal chain relations from time series. Reservoir Computing Causality has three advantages: (i) robustness to noisy time series; (ii) computational efficiency; and (iii) seamless causal inference from high-dimensional data. Second, I will demonstrate that Multi-Resolution Dynamic Mode Decomposition can systematically identify physically meaningful patterns in high-dimensional climate data; in particular, it is able to extract the changing annual cycle.

  

17 June 2021 15h00 CET: Bruno Sudret (ETH Zürich, Chair of Risk, Safety and Uncertainty Quantification): Surrogate modelling approaches for stochastic simulators

Computational models, a.k.a. simulators, are used in all fields of engineering and applied sciences to help design and assess complex systems in silico. Advanced analyses such as optimization or uncertainty quantification, which require repeated runs with varying input parameters, cannot be carried out with brute-force methods such as Monte Carlo simulation due to the computational cost. This has motivated the recent development of surrogate models such as polynomial chaos expansions and Gaussian processes, among others. For so-called stochastic simulators, used e.g. in epidemiology, mathematical finance or wind turbine design, an intrinsic source of stochasticity exists on top of the well-identified system parameters. As a consequence, for a given vector of inputs, repeated runs of the simulator (called replications) provide different results, in contrast to the case of deterministic simulators: for each single input, the response is a random variable to be characterized.
In this talk we present an overview of the literature devoted to building surrogate models of such simulators, which we call stochastic emulators. Then we focus on a recent approach based on generalized lambda distributions and polynomial chaos expansions. The approach can be used with or without replications, which brings efficiency and versatility. As an outlook, practical applications to sensitivity analysis will also be presented.
Acknowledgments: This work is carried out together with Xujia Zhu, a PhD student supported by the Swiss National Science Foundation under Grant Number #175524 “SurrogAte Modelling for stOchastic Simulators (SAMOS)”.
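As a rough illustration of the generalized-lambda idea (a minimal sketch, assuming the common FKML parameterisation; in the approach presented in the talk the four lambda parameters are themselves functions of the simulator inputs, each represented by a polynomial chaos expansion):

```python
import numpy as np

def gld_quantile(u, lam1, lam2, lam3, lam4):
    """Quantile function of the generalized lambda distribution
    (FKML parameterisation; lam2 > 0, lam3 and lam4 nonzero here)."""
    return lam1 + ((u**lam3 - 1.0) / lam3
                   - ((1.0 - u)**lam4 - 1.0) / lam4) / lam2

# Emulating the stochastic response at one fixed input: once lambda is known,
# sampling is plain inverse-transform sampling with uniform draws.
rng = np.random.default_rng(0)
u = rng.uniform(size=10_000)
samples = gld_quantile(u, lam1=0.0, lam2=1.0, lam3=0.1, lam4=0.1)
```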

Video

10 Jun. 2021 16h00 CET: Hannah Christensen (Oxford): Machine Learning for Stochastic Parametrisation

Atmospheric models used for weather and climate prediction are traditionally formulated in a deterministic manner. In other words, given a particular state of the resolved scale variables, the most likely forcing from the sub-grid scale motion is estimated and used to predict the evolution of the large-scale flow. However, the lack of scale-separation in the atmosphere means that this approach is a large source of error in forecasts. Over the last decade an alternative paradigm has developed: the use of stochastic techniques to characterise uncertainty in small-scale processes. These techniques are now widely used across weather, seasonal forecasting, and climate timescales.

While there has been significant progress in emulating parametrisation schemes using machine learning, the focus has been entirely on deterministic parametrisations. In this presentation I will discuss data driven approaches for stochastic parametrisation. I will describe experiments which develop a stochastic parametrisation using the generative adversarial network (GAN) machine learning framework for a simple atmospheric model. I will conclude by discussing the potential for this approach in complex weather and climate prediction models.
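For readers unfamiliar with the setup, a minimal sketch of a conditional GAN for a stochastic parametrisation (all dimensions, architectures and variable names are illustrative, not taken from the talk):

```python
import torch
import torch.nn as nn

state_dim, noise_dim, tend_dim = 8, 4, 8  # illustrative sizes

# Generator: (resolved state, noise) -> sample of the subgrid tendency.
G = nn.Sequential(nn.Linear(state_dim + noise_dim, 64), nn.ReLU(),
                  nn.Linear(64, tend_dim))
# Discriminator: (resolved state, tendency) -> real/fake logit.
D = nn.Sequential(nn.Linear(state_dim + tend_dim, 64), nn.ReLU(),
                  nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(x, tend):
    """One adversarial update on a batch of (state, diagnosed tendency) pairs."""
    n = x.shape[0]
    fake = G(torch.cat([x, torch.randn(n, noise_dim)], dim=1))

    # Discriminator: push real pairs towards 1, generated pairs towards 0.
    d_loss = (bce(D(torch.cat([x, tend], dim=1)), torch.ones(n, 1))
              + bce(D(torch.cat([x, fake.detach()], dim=1)), torch.zeros(n, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make generated pairs look real.
    g_loss = bce(D(torch.cat([x, fake], dim=1)), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Because the generator consumes fresh noise on every call, repeated evaluations at the same resolved state yield an ensemble of tendencies, which is exactly the stochastic behaviour a deterministic emulator cannot reproduce.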

Video

21 May 2021 16h00: John Harlim (Penn State): Machine learning of missing dynamical systems

In the talk, I will discuss a general closure framework to compensate for the model error arising from missing dynamical systems. The proposed framework reformulates the model error problem into a supervised learning task to estimate a very high-dimensional closure model, deduced from the Mori-Zwanzig representation of a projected dynamical system with projection operator chosen based on Takens embedding theory. Besides theoretical convergence, this connection provides a systematic framework for closure modeling using available machine learning algorithms. I will demonstrate numerical results using a kernel-based linear estimator as well as neural network-based nonlinear estimators. If time permits, I will also discuss error bounds and mathematical conditions that allow for the estimated model to reproduce the underlying stationary statistics, such as one-point statistical moments and auto-correlation functions, in the context of learning Ito diffusions.
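A toy sketch of the two ingredients mentioned above, a delay embedding and a kernel-based estimator; the data and the closure target here are synthetic stand-ins, not the systems from the talk:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def delay_embed(x, dim, lag):
    """Takens delay embedding: row i is (x[t], x[t-lag], ..., x[t-(dim-1)*lag])."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[(dim - 1 - j) * lag:(dim - 1 - j) * lag + n]
                            for j in range(dim)])

rng = np.random.default_rng(1)
t = 0.1 * np.arange(2000)
x = np.sin(t) + 0.05 * rng.standard_normal(t.size)  # observed resolved variable
g = np.cos(t)                                       # stand-in "missing" term

dim, lag = 5, 2
X = delay_embed(x, dim, lag)
y = g[(dim - 1) * lag:]          # align closure targets with the embeddings

closure = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0).fit(X, y)
g_pred = closure.predict(X)      # learned closure as a function of the history
```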

Video

29 Apr. 2021 16h15: Nathaniel Trask (Sandia): Structure preserving deep learning architectures for convergent and stable data-driven modeling

The unique approximation properties of deep architectures have attracted attention in recent years as a foundation for data-driven modeling in scientific machine learning (SciML) applications. The "black-box" nature of DNNs, however, requires large amounts of data, generalizes poorly in traditional engineering settings where available data is relatively scarce, and makes it difficult to provide a priori guarantees about the accuracy and stability of extracted models. We adopt the perspective that tools from mimetic discretization of PDEs may be adapted to SciML settings, developing architectures and fast optimizers tailored to the specific needs of SciML. In particular, we focus on: realizing convergence competitive with FEM, preserving topological structure fundamental to conservation and multiphysics, and providing stability guarantees. In this talk we introduce some motivating applications at Sandia, spanning shock magnetohydrodynamics and semiconductor physics, before providing an overview of the mathematics underpinning these efforts.

Video

25 Mar. 2021 15h00: Jurriaan Buist (CWI): Energy conservation for the one-dimensional two-fluid model for two-phase pipe flow

The one-dimensional two-fluid model (TFM) is a simplified model for multiphase flow in pipes. It is derived from a spatial averaging process, which introduces a closure problem concerning the wall and interface friction terms, similar to the closure problem in turbulence. To tackle this closure problem, we have approximated the friction terms by neural networks trained on data from direct numerical simulations (DNS).

Besides the closure problem, the two-fluid model has a long-standing stability issue: it is only conditionally well-posed. In order to tackle this issue, we analyze the underlying structure of the TFM in terms of the energy behavior. We show the new result that energy is an inherent 'secondary' conserved property of the mass and momentum conservation equations of the model. Furthermore, we develop a new spatial discretization that exactly conserves this energy in simulations.

The importance of structure preservation, and the core of our analysis, is not limited to the TFM. Neural networks that approximate physical systems can also be designed to preserve the underlying structure of a PDE. In this way, physics-informed machine learning can yield more physical results.

Video 

11 Mar. 2021 15h30: Benjamin Sanderse (CWI): Multi-Level Neural Networks for PDEs with Uncertain Parameters

A novel multi-level method for partial differential equations with uncertain parameters is proposed. The principle behind the method is that the error between grid levels in multi-level methods has a spatial structure that is, to a good approximation, independent of the actual grid level. Our method learns this structure by employing a sequence of convolutional neural networks, which are well suited to automatically detect local error features as latent quantities of the solution. Furthermore, by using the concept of transfer learning, the information from coarse grid levels is reused on fine grid levels in order to minimize the required number of samples on fine levels. The method outperforms state-of-the-art multi-level methods, especially when complex PDEs (such as single-phase and free-surface flow problems) are concerned, or when high accuracy is required.
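A hypothetical sketch of the basic building block described above: a small CNN that maps a coarse-level solution field to the error with respect to the next finer level (channels, depth and grid size are illustrative):

```python
import torch
import torch.nn as nn

class LevelErrorNet(nn.Module):
    """Predicts the (fine minus coarse) error field from a coarse solution."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1))

    def forward(self, u_coarse):             # (batch, 1, ny, nx)
        return self.net(u_coarse)            # predicted error, same shape

model = LevelErrorNet()
err_pred = model(torch.randn(4, 1, 32, 32))  # placeholder coarse solutions
# Transfer learning step: reuse these weights as initialisation when training
# the error network for the next, finer level, so fewer fine samples are needed.
```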

Video 

25 Feb. 2021, 15h00: Maximilien de Zordo-Banliat (Safran Tech / DynFluid Laboratory): Space-dependent Bayesian model averaging of turbulence models for compressor cascade flows.

Prediction of systems described by multiple alternative models is important for many applications in science and engineering, namely when it is not possible to identify a model that significantly outperforms every other model for each criterion. Two mathematical approaches tackling this issue are Bayesian Model Averaging (BMA) [1, 2], which builds an average of the concurrent models weighted by their marginal posterior probabilities, and Stacking [3, 4], where the unknown prediction is projected on a basis of alternative models, with weights to be learned from data. In both approaches, the weights are generally constant throughout the domain. More recently, Yu et al. [5] have proposed the Clustered Bayesian Averaging (CBA) algorithm, which leverages an ensemble of Regression Trees (RT) to infer weights as space-dependent functions. Similarly, we propose a Space-Dependent Stacking (SDS) algorithm which modifies the stacking formalism to include space-dependent weights, based on a spatial decomposition method.

In this work, the above-mentioned methods are investigated in a Computational Fluid Dynamics (CFD) context. Specifically, CFD of engineering systems often relies on Reynolds-Averaged Navier-Stokes (RANS) models to describe the effect of (unresolved) turbulent motions on the mean (resolved) field. Since all turbulent motions are modelled, RANS turbulence models tend to be uncertain and case-dependent. Quantifying and reducing such uncertainties is then of the utmost importance for aerodynamic design in general, and specifically for the analysis and optimization of complex turbomachinery flows. In previous work [6], the present authors used Bayesian model averages of RANS models to provide improved predictions of a compressor cascade configuration, along with a quantification of the confidence intervals associated with modelling uncertainties. Constant weights throughout the field were used. It is however expected, based on theoretical considerations and expert judgment, that different RANS models tend to perform better in some flow regions than in others, and consequently they should be assigned space-varying weights. For this reason, we implement and assess space-dependent averages of RANS models for compressor flow predictions. More precisely, the focus is put on two alternative algorithms: (i) a version of CBA adapted to flow variable fields, and (ii) a Space-Dependent Stacking (SDS) method based on a Karhunen-Loève decomposition of the mixture weights. Flow regions are described using selected features, formulated as functions of the mean flow quantities. Given a set of concurrent RANS models and a database of reference flow data corresponding to various operating conditions, the two algorithms are first trained against data, and subsequently applied to the prediction of an unobserved flow, i.e. another operating condition of the compressor cascade. The algorithms assign a probability to each model in each region of the feature space, based on their ability to accurately predict selected Quantities of Interest (QoI) in this region. The space-dependent weighted average of the RANS models applied to the prediction scenario is used to reconstruct the expected solution and the associated confidence intervals. Preliminary results show that both methods generally yield more accurate solutions than the constant-weight BMA method, and provide a valuable estimate of the uncertainty intervals.
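For reference, the constant-weight BMA predictive mixture and its space-dependent generalisation, in schematic form (notation mine): with concurrent models M_k, data D and predicted quantity Δ,

```latex
p(\Delta \mid D) = \sum_k P(M_k \mid D)\, p(\Delta \mid M_k, D),
\qquad
p\big(\Delta(x) \mid D\big) = \sum_k w_k(x)\, p\big(\Delta(x) \mid M_k, D\big),
\quad \sum_k w_k(x) = 1 ,
```

where CBA and SDS differ in how the space-dependent weights w_k(x) are represented and learned.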

References

[1] D. Madigan, A. E. Raftery, C. Volinsky, and J. Hoeting. Bayesian model averaging. In Proceedings of the AAAI Workshop on Integrating Multiple Learned Models, Portland, OR, pages 77–83, 1996.

[2] J. A. Hoeting, D. Madigan, A. E. Raftery, and C. T. Volinsky. Correction to: “Bayesian model averaging: a tutorial” [Statist. Sci. 14 (1999), no. 4, 382–417]. Statist. Sci., 15(3):193–195, 2000.

[3] D. H. Wolpert. Stacked generalization. Neural Networks, 5(2):241–259, 1992.

[4] L. Breiman. Stacked regressions. Machine Learning, 24(1):49–64, 1996.

[5] Q. Yu, S. N. MacEachern, and M. Peruggia. Clustered Bayesian model averaging. Bayesian Analysis, 8(4):883–908, 2013.

[6] M. de Zordo-Banliat, X. Merle, G. Dergham, and P. Cinnella. Bayesian model-scenario averaged predictions of compressor cascade flows under uncertain turbulence models. Computers & Fluids, 201:104473, 2020.

 

11 Feb. 2021, 13h00: Stephan Rasp (ClimateAI): Hybrid machine learning-physics climate modeling: challenges and potential solutions

Climate models still exhibit large structural errors caused by uncertainties in the representation of sub-grid processes. The recent availability of short-term high-resolution simulations has motivated the development of machine learning-based sub-grid schemes. Such hybrid models have been shown to have the potential to improve the representation of processes such as convection and radiation, but fundamental problems remain, in particular instabilities and model drift. In this talk I will discuss some potential solutions to the key issues facing hybrid modeling.

Video

21 Jan. 2021, 16h00: Kelbij Star (SCK•CEN / UGent): Reduced order modeling for incompressible flows with heat transfer and parametric boundary conditions

Complex fluid dynamics problems are usually solved numerically using discretization methods. However, these methods are often unfeasible for applications that require near-real-time modeling, or the testing of a large number of different system configurations, for instance for control purposes, sensitivity analyses or uncertainty quantification studies. This has stimulated the development of modeling techniques that reduce the number of degrees of freedom of the high-fidelity fluid flow and heat transfer models. Mathematical techniques are used to extract “features” of the high-fidelity model in order to replace it by a more simplified model. In that way, the required computational time and computer memory usage are reduced. In this presentation, I will present and discuss some reduced order modeling methods for incompressible flows and heat transfer problems, with a focus on the challenge of imposing parametric boundary conditions at the reduced order level.

Video

14 Jan. 2021, 16h00: Jorino van Rhijn (CWI): Mind the Map: Generating Strong Approximations of Financial SDEs with GANs

Generative adversarial networks (GANs) have shown promising results when applied to partial differential equations and financial time series generation. We investigate whether GANs can be used to provide a strong approximation to the solution of one-dimensional stochastic differential equations (SDEs) using large time steps. Standard GANs are only able to approximate processes in conditional distribution, yielding a weak approximation to the SDE. A GAN architecture is proposed that enables strong approximation, called the constrained GAN. The discriminator of this GAN is informed with the map between the prior input to the generator and the corresponding output samples. The GAN is trained on a dataset obtained with exact simulation. The discriminator enforces path-wise equality between the exactly simulated training samples and the generator output. The architecture was tested on geometric Brownian motion (GBM) and the Cox–Ingersoll–Ross (CIR) process, where it was conditioned on a range of time steps and previous values of the asset process. The constrained GAN outperformed discrete-time schemes in strong error on a discretisation with large time steps. It also outperformed the standard conditional GAN when approximating the conditional distribution. We discuss methods to extend the constrained GAN to general one-dimensional Ito SDEs, for which exact simulation may not be available. In future work, the constrained GAN could be conditioned on the SDE parameter set as well, allowing it to learn an entire family of solutions simultaneously.
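For concreteness, the kind of exact large-time-step simulation that can generate such a training set for GBM (parameters illustrative):

```python
import numpy as np

def gbm_step(s, mu, sigma, dt, rng):
    """Exact one-step sample of dS = mu*S dt + sigma*S dW over a step dt:
    S_{t+dt} = S_t * exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*Z), Z ~ N(0,1)."""
    z = rng.standard_normal(np.shape(s))
    return s * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)

rng = np.random.default_rng(42)
s0 = np.full(100_000, 1.0)
s1 = gbm_step(s0, mu=0.05, sigma=0.2, dt=1.0)  # one large step, exact in law
```

Because this step is exact in law for any dt, it provides the path-wise reference samples against which a large-time-step generator can be compared.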

Video

7 Jan. 2021, 16h00: Boris Boonstra (CWI): Valuation of electricity storage contracts using the COS method.

In this presentation, we will introduce a numerical pricing technique, the well-known COS method, to solve the dynamic programming formulation of various financial derivatives in the electricity market. In particular, the electricity storage contract is discussed and priced with the COS method. Storage of electricity has become increasingly important, due to the gradual replacement of fossil fuels by more variable and uncertain renewable energy sources. The electricity storage contract is a contract where electricity can be sold/bought at fixed moments in time by trading on the electricity market in order to make a profit while the energy level in the storage changes. For example, an application of the electricity storage contract is to investigate whether a parking lot with electric cars can be utilized to make a profit by trading on the electricity market, by using the batteries of the stationary cars. The main idea of the COS method is to approximate the conditional probability density function via the Fourier cosine expansion and make use of the relation between the coefficients of the Fourier cosine expansion and the characteristic function. The use of the characteristic function is convenient, because the density function is often unknown for relevant asset processes, in contrast to the characteristic function.
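The identity at the heart of the COS method (due to Fang and Oosterlee): on a truncation interval [a, b], the conditional density f is recovered from its characteristic function φ as

```latex
f(x) \;\approx\; \frac{2}{b-a} \sideset{}{'}\sum_{k=0}^{N-1}
\operatorname{Re}\!\left\{ \varphi\!\left(\frac{k\pi}{b-a}\right)
e^{-i k\pi \frac{a}{b-a}} \right\}
\cos\!\left( k\pi \frac{x-a}{b-a} \right),
```

where the primed sum halves the k = 0 term; expectations against f, and hence the dynamic programming steps, then reduce to sums over these cosine coefficients.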

 

16 Dec. 2020, 14h30: Stef Maree (CWI): DroneSurance: Mathematical modelling of pay-as-you-fly drone insurance

Commercial drone usage is rapidly growing, for example for the delivery of parcels, surveillance, and the inspection of dikes, crops, or windmills. DroneSurance is a tool for drone users to purchase insurance only when they need it, and for as long as they need it. By incorporating flight-specific information, fairer premiums can be determined compared to traditional approaches. DroneSurance is funded via an EIT Digital grant, and is developed together with partners Bright Cape (NL), Achmea (NL), Universidad Politécnica de Madrid (Spain) and EURAPCO (Switzerland). The role of CWI in this project is to build a simple but robust prediction model for drone damage that initially uses expert knowledge and can gradually incorporate data as policies are sold and data is gathered. To this end, we have adopted the flexible framework offered by Bayesian Generalised Linear Models (BGLM).

 

10 Dec. 2020, 15h00: Vinit Dighe (CWI): Efficient Bayesian calibration of aeroelastic wind turbine models using surrogate modelling.

A very common approach to studying the aerodynamic performance of wind turbines is to employ low-fidelity aeroelastic models. However, in several situations, the accuracy of such models can be unsatisfactory when comparing model predictions with experiments. In the past, the calibration of the model parameters was done manually by trial and error, which is inefficient and prone to error. To address this issue, a rigorous approach to the calibration problem is proposed by recasting it in a probabilistic setting using a Bayesian framework. To make this framework computationally feasible, the procedure is accelerated through the use of a polynomial chaos expansion based surrogate model. The calibration is finally validated by using the calibrated model parameters to predict the aerodynamic performance of wind turbines in two different experimental setups.
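Schematically, with θ the model parameters, d the measurements, and M̃ the PCE surrogate of the aeroelastic model (assuming, for illustration only, a Gaussian measurement-error model):

```latex
p(\theta \mid d) \;\propto\; p(d \mid \theta)\, p(\theta),
\qquad
p(d \mid \theta) \;\approx\; \mathcal{N}\big(d \mid \tilde{M}(\theta), \Sigma\big),
```

where the cheap surrogate M̃ makes the many likelihood evaluations required by the Bayesian inference affordable.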

 

Previous UQ seminar talks:

This seminar is a continuation of the UQ seminar. Presentations are available internally at https://oc.cwi.nl.

2020

12 Mar. 2020: Martin Janssens (TU Delft): Machine learning unresolved turbulence in a variational multiscale model

Models for the influence of unresolved processes on resolved scales remain a prominent error source in numerical simulations of engineering and atmospheric flows. In recent years, improvements in the capacity of machine learning algorithms and the increasing availability of high-fidelity datasets have identified data-driven unresolved scales models in general, and Artificial Neural Networks (ANNs) in particular, as high-potential options to break the deadlock. Yet, early contributions in this field rely on inconsistent multiscale model formulations and are plagued by numerical instability. To sketch a clearer picture of the sources of the accuracy and instability of ANN unresolved scales models, we have developed a framework in which no assumptions on the model form are made. We use ANNs to infer exact projections of the unresolved scales processes on the resolved degrees of freedom. Such "interaction terms" naturally arise from Variational Multiscale Model (VMM) formulations. Our VMM-ANN framework limits error to the data-driven interaction term approximations, offering explicit insight into their functioning. We assess our model in the context of a one-dimensional, forced Burgers’ problem, for a range of simple and realistic forcings. Simple feedforward ANNs with local input, trained offline on error-free data before being inserted into forward simulations, strongly improve the prediction of the interaction terms of our problem compared to traditional, algebraic VMM closures at various levels of discretisation; they also generalise well to uncorrelated instances of our forcing. However, this performance does not translate to simulations of forward problems. The model suffers from instability due to i) ill-posed nonlinear solution procedures and ii) self-inflicted error accumulation. These correspond to two dimensions of forward simulations that are not accounted for by offline training on error-free data. We show that introducing noise to the training data can partly remedy these problems in our simple setting. Yet, we conclude that appreciable challenges remain in order to capitalise on the promise offered by ANNs to improve the unresolved scales modelling of turbulence.

3 Feb. 2020: Akil Narayan (University of Utah, US): Low-rank algorithms for multi-level and multi-fidelity simulations in uncertainty quantification

High-fidelity computational software is often a trusted simulation-based tool for physics-based prediction of real-world phenomena. The tradeoff for such high-fidelity software delivering an accurate prediction is substantial computational expense. In such situations, performing uncertainty quantification on these simulations, which often requires many queries of the software, is infeasible. Alternatively, low-fidelity simulations are often orders of magnitude less expensive, but yield inaccurate physical predictions. In more complicated cases, one is faced with multiple models with multiple fidelity levels, and must make a decision about where to allocate computational resources. In this talk we explore low-rank strategies for addressing such multi-fidelity problems. While low-fidelity models are of dubious predictive value, they can be of substantial value in quantifying uncertainty through low-rank methods. We explore the practical and mathematical aspects of these approaches, and demonstrate their effectiveness on problems in molecular dynamics, linear elasticity, and topology optimization.

2019

5 Dec. 2019: Yous van Halder (CWI): Multi-level surrogate model construction with convolutional neural networks
In this talk, we will present a novel approach that uses convolutional neural networks to improve multi-level Monte Carlo estimators. Next to the classic idea of variance decay upon grid refinement, we employ the idea that the PDE error behavior is similar between consecutive levels when the grid is fine enough. Based on these ideas, we design the following neural network: a convolutional neural network that extracts local error features as latent quantities, and a fully connected network to map local errors to global errors, subsequently extended with transfer learning to efficiently learn the error behavior on finer grids. We show promising results on nonlinear PDEs, including the incompressible Navier-Stokes equations.

21 Nov. 2019: Philippe Blondeel (KU Leuven): p-refined Multilevel Quasi-Monte Carlo for Galerkin Finite Element Methods with applications in Civil Engineering
Practical civil engineering problems are often characterized by uncertainty in their material parameters. Discretization of the underlying equations is typically done by means of the Galerkin Finite Element method. The uncertain material parameter can then be expressed as a random field that can be represented by, for example, a Karhunen–Loève expansion. Computation of the stochastic response is very costly, even if state-of-the-art Multilevel Monte Carlo (MLMC) is used. A significant cost reduction can be achieved by using p-refined Multilevel Quasi-Monte Carlo (p-MLQMC). The method is based on the idea of variance reduction by employing a hierarchical discretization of the problem based on a p-refinement scheme. This novel method is then combined with a rank-1 lattice rule yielding faster convergence compared to the method based on random Monte Carlo points. This method is first benchmarked on an academic beam bending problem. Finally, we use our algorithm for the assessment of the stability of slopes, a problem that arises in geotechnical engineering.
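The estimator underlying both MLMC and p-MLQMC is the telescoping identity over a hierarchy of discretisation levels,

```latex
\mathbb{E}[P_L] \;=\; \mathbb{E}[P_0] \;+\; \sum_{\ell=1}^{L} \mathbb{E}\left[ P_\ell - P_{\ell-1} \right],
```

where the level differences have small variance and therefore need few samples on the expensive levels; p-MLQMC builds the hierarchy by p-refinement of the finite elements and samples each term with a rank-1 lattice rule instead of random points.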

7 Nov. 2019: Georgios Pilikos (CWI): Bayesian Machine Learning for Seismic Compressive Sensing
We will introduce Bayesian machine learning for efficient seismic surveys. This includes data-driven models that learn sparse representations (features) of the data, built around the framework of Compressive Sensing. Furthermore, we will show that, using these models, it is possible to predict missing receivers' values and simultaneously quantify the uncertainty of these predictions.

24 Oct. 2019: Anna Nikishova (UvA): Sensitivity analysis based dimension reduction of multiscale models
Sensitivity analysis (SA) quantifies the effects of uncertainty in the model inputs on the model outputs. Whenever the variance is a representative measure of model uncertainty, the Sobol variance-based method is the preferred approach to identify the main sources of uncertainty. Additionally, SA can be applied to identify ineffectual input parameters in order to decrease the model dimensionality by fixing such parameters at their mean values.
In this talk, we demonstrate that in some cases SA of a single-scale model provides information on the sensitivity of the final multiscale model output. This can then be employed to reduce the dimensionality of the multiscale model input. However, the sensitivity of a single-scale model response does not always bound the sensitivity of the multiscale model output. Hence, an analysis of the function defining the relation between single-scale components is required to understand whether single-scale SA can be used to reduce the dimensionality of the overall multiscale model input space.
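For reference, the first-order Sobol index of input X_i for a model output Y is

```latex
S_i \;=\; \frac{\operatorname{Var}_{X_i}\!\big( \mathbb{E}[\,Y \mid X_i\,] \big)}{\operatorname{Var}(Y)} ,
```

and inputs whose first-order and total indices are negligible are the natural candidates for being fixed at their mean values.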

26 Sept. 2019: Nikolaj Mucke (CWI): Non-Intrusive Reduced Order Modeling of Nonlinear PDE-Constrained Optimization using Artificial Neural Networks
Nonlinear PDE-constrained optimization problems are computationally very time-consuming to solve numerically. Hence, there is much to be gained from replacing the full order model with a reduced order surrogate model. Using conventional methods, such as proper orthogonal decomposition (POD), often yields such a reduced order model, but does not necessarily cut down computation time for nonlinear problems, due to the intrusive nature of the method. As an alternative, artificial neural networks, combined with POD, are here presented as a viable non-intrusive surrogate model that cuts down computation time significantly.

The talk will be divided into three parts: 1) a brief introduction to PDE-constrained optimization and a discussion about why such problems are computationally heavy to solve, 2) an introduction to POD in the context of PDE-constrained optimization, 3) a presentation of how neural networks can be utilized as a surrogate model.

5 Sept. 2019: Kelbij Star (SCK•CEN / Ghent University): POD-Galerkin Reduced Order Model of the Boussinesq approximation for buoyancy-driven flows
A parametric Reduced Order Model (ROM) for buoyancy-driven flows is presented, for which the Full Order Model (FOM) is based on a finite volume approximation. To model the buoyancy, a Boussinesq approximation is applied, which introduces a two-way coupling between the incompressible Boussinesq equations and the energy equation. The ROM is obtained by performing a Galerkin projection of the governing equations onto a reduced basis space constructed using a Proper Orthogonal Decomposition approach. The ROM is tested on a 2D differentially heated cavity, of which the wall temperatures are parametrized using a control function method. Furthermore, the issues and challenges of Reduced Order Modeling, like stability, non-linearity and boundary control, are discussed. Finally, attention will be paid to the training of the ROM, especially for applications in Uncertainty Quantification.
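A minimal sketch of the POD step (illustrative only; sizes and snapshots are placeholders, and the Galerkin projection of the Boussinesq equations is not shown):

```python
import numpy as np

n_dof, n_snap, r = 1000, 60, 8
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((n_dof, n_snap))  # placeholder FOM snapshots

# POD basis: left singular vectors of the (centered) snapshot matrix.
mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
basis = U[:, :r]                                  # reduced basis, (n_dof, r)

energy = np.cumsum(s**2) / np.sum(s**2)           # retained "energy" per mode
a = basis.T @ (snapshots[:, 0:1] - mean)          # reduced coordinates of a snapshot
```

The Galerkin ROM then evolves only the r reduced coordinates, which is what makes near-real-time evaluation feasible.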

22 Aug. 2019: Hemaditya Malla (CWI - TU/e): Response-based quadrature rules for uncertainty quantification
Forward propagation problems in uncertainty quantification using polynomial-based surrogates involve the numerical approximation of integrals using quadrature rules. Such numerical approximations require function evaluations that are often costly. Most quadrature rules in the literature are constructed so as to exactly integrate polynomials up to a given degree. The accuracy of such quadrature rules depends on the smoothness of the function being integrated; in order to integrate functions that lack smoothness, this approach requires a large number of function evaluations to reach a certain accuracy. In this talk, an algorithm is introduced that generates quadrature rules of higher accuracy for such functions, requiring fewer function evaluations.
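In this setting a quadrature rule approximates a weighted integral of the response f by a finite sum,

```latex
\int f(x)\, \rho(x)\, \mathrm{d}x \;\approx\; \sum_{i=1}^{n} w_i\, f(x_i),
```

with nodes x_i and weights w_i classically chosen so that the rule is exact for polynomials up to a given degree; the accuracy for a non-smooth f then hinges on how well f is approximated by such polynomials, which motivates rules adapted to the response itself.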