Seminar on machine learning and UQ in scientific computing

Seminar on machine learning and uncertainty quantification for scientific computing, organized by the scientific computing group at CWI.

This seminar is organized by the Scientific Computing group of CWI Amsterdam. The focus is on the application of Machine Learning (ML) and Uncertainty Quantification (UQ) in scientific computing. Topics of interest include, among others:

  • combination of data-driven models and (multiscale) simulations,
  • new ML architectures suited for scientific computing or UQ,
  • incorporation of (physical) constraints into data-driven models,
  • efficient (online) learning strategies,
  • using ML for dimension reduction / creating surrogates,
  • inverse problems using ML surrogates,

and any other topic in which some form of ML and/or UQ is used to enhance (existing) scientific computing methodologies. All applications are welcome, be they financial, physical, biological or otherwise.

For more information, or if you'd like to attend one of the talks, please contact Wouter Edeling of the SC group.

Schedule of upcoming talks

April

30 April 2024 11h00 CET: Marius Kurz (Centrum Wiskunde & Informatica): Learning to Flow: Machine Learning and Exascale Computing for Next-Generation Fluid Dynamics

The computational sciences have become an essential driver for understanding the dynamics of complex, nonlinear systems, ranging from the dynamics of the earth's climate to obtaining information about a patient's characteristic blood flow in order to derive personalized approaches in medical therapy. These advances can be ascribed, on the one hand, to the exponential increase in available computing power, which has allowed the simulation of increasingly large and complex problems and has led to the emerging generation of exascale systems in high-performance computing (HPC). On the other hand, methodological advances in discretization methods and in the modeling of turbulent flow have significantly increased the fidelity of simulations in fluid dynamics. Here, the recent advances in machine learning (ML) have opened a whole field of novel, promising modeling approaches.

This talk will first introduce the potential of GPU-based simulation codes in terms of energy-to-solution using the novel GALÆXI code. Next, the integration of machine learning methods for large eddy simulation will be discussed, with emphasis on their a posteriori performance, the stability of the simulation, and the interaction between the turbulence model and the discretization. Based on this, Relexi is introduced as a potent tool that allows employing HPC simulations as training environments for reinforcement learning models at scale, thus converging HPC and ML.
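
As a rough illustration of that last point (this is not Relexi's actual interface): treating a simulation as a reinforcement-learning environment amounts to exposing it through reset/step semantics. A minimal sketch, in which the toy dynamics, names, and reward are placeholders:

```python
# Conceptual sketch only -- not Relexi's actual API. It illustrates the generic
# pattern of exposing a (here: toy) simulation as an RL environment with
# reset/step semantics, so an RL agent can tune a model parameter online.
import numpy as np

class ToyLESEnv:
    """The agent picks a sub-grid model coefficient each step; the reward
    penalizes deviation of the resolved 'spectrum' from a reference."""

    def __init__(self, n_modes=16, seed=0):
        self.rng = np.random.default_rng(seed)
        self.reference = np.exp(-np.arange(n_modes) / 4.0)  # target spectrum
        self.n_modes = n_modes

    def reset(self):
        self.state = self.rng.standard_normal(self.n_modes)
        return self.state

    def step(self, action):
        # Stand-in for one solver time step under model coefficient `action`.
        damping = 0.1 + 0.9 * float(np.clip(action, 0.0, 1.0))
        self.state = (1.0 - 0.05 * damping) * self.state \
                     + 0.01 * self.rng.standard_normal(self.n_modes)
        reward = -float(np.sum((self.state**2 - self.reference) ** 2))
        return self.state, reward

env = ToyLESEnv()
obs = env.reset()
obs, reward = env.step(0.5)  # an RL library would supply the actions
```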

May

23 May 2024 11h00 CET: Frans van der Meer (Delft University of Technology): Multiscale modeling of composite materials with data-driven surrogates

For computational modeling of the mechanical performance of advanced engineering materials such as fiber-reinforced composites, multiple levels of observation are relevant. For composite laminates, high-fidelity models have been developed for the mesoscale, where individual layers in the laminate are modeled as homogeneous orthotropic material, and for the microscale, where individual fibers embedded in the polymer matrix are modeled explicitly. There is a long-standing vision to couple these scales in concurrent multiscale analysis. However, the coupled multiscale approach comes with excessive computational cost due to the many times the microscopic problem needs to be evaluated. Redundancy in the computational effort associated with these many evaluations of the micromodel potentially offers a way out, namely through data-driven surrogates: the idea is that a smaller number of evaluations can be used to train a surrogate for the micromodel that gives accurate predictions at lower computational cost. However, the path-dependence of the material behavior renders the dimensionality of the training space unbounded, as a consequence of which data-driven surrogates do not generalize well even after being trained on a large amount of data.

In this presentation, a short overview of strategies for modeling composite materials at multiple scales is given, after which two approaches are presented that aim at limiting the training burden for data-driven surrogates in the context of multiscale modeling. The first is an active learning approach with a Gaussian-process-based surrogate, which exploits the fact that in any given multiscale simulation only a small subspace of the complete microscopic input space is explored. The second is a physically recurrent neural network, in which classical constitutive models are embedded in a neural network to incorporate physics-based memory, allowing for superior generalization.

27 May 2024 11h00 CET: Beatriz Moya (CNRS@CREATE): Exploring the role of geometric and learning biases in Model Order Reduction and Data-Driven simulation

This talk highlights the practical application and synergistic use of geometric and learning biases in interpretable and consistent deep learning for complex problems. We propose the use of Geometric Deep Learning for Model Order Reduction. Its high generalizability, even with limited data, facilitates real-time evaluation of partial differential equations (PDEs) for complex behaviors and changing domains. Additionally, we showcase the application of Thermodynamics-Informed Machine Learning as an alternative when the physics of the system under study is not fully known. This algorithm results in a cognitive digital twin capable of self-correction for adapting to changing environments when only partial evaluations of the dynamical state are available. Finally, the integration of Geometric Deep Learning and Thermodynamics-Informed Machine Learning produces an enhanced combined effect with high applicability in real-world domains.

June

20 June 2024 10h00 CET: Francesca Bartolucci (Delft University of Technology)

Previous talks 2024

We consider neural network solvers for differential equation-based problems based on the pioneering collocation approach introduced by Lagaris et al. These methods are very versatile, as they do not necessarily require an explicit mesh, allow for the solution of parameter identification problems, and are well-suited for high-dimensional problems. However, the training of these neural network models is generally not very robust and may require a lot of hyperparameter tuning. In particular, due to the so-called spectral bias, the training is notoriously difficult when scaling up to large computational domains as well as for multiscale problems. In this work, we give an overview of the methods from the literature, and we then focus on two overlapping domain decomposition-based techniques, namely finite basis physics-informed neural networks (FBPINNs) and deep domain decomposition (Deep-DDM) methods. Whereas the former introduces the domain decomposition via a partition of unity within the classical gradient-based optimization, the latter employs a classical outer Schwarz iteration. In order to obtain scalability and robustness for multiscale problems, we consider a multi-level framework for both approaches.
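
As a reminder of the collocation idea these methods build on, below is a minimal Lagaris-style PINN for a 1D Poisson problem. PyTorch, the architecture, and all hyperparameters are illustrative choices, not taken from the talk:

```python
# Minimal Lagaris-style collocation PINN for -u''(x) = f(x) on (0, 1) with
# u(0) = u(1) = 0 and f(x) = pi^2 sin(pi x), so the exact solution is sin(pi x).
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(5000):
    x = torch.rand(128, 1, requires_grad=True)        # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = torch.pi**2 * torch.sin(torch.pi * x)
    pde_loss = (-d2u - f).pow(2).mean()               # squared PDE residual
    xb = torch.tensor([[0.0], [1.0]])
    bc_loss = net(xb).pow(2).mean()                   # Dirichlet boundary penalty
    loss = pde_loss + 100.0 * bc_loss
    opt.zero_grad(); loss.backward(); opt.step()
```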

This talk focuses on the possibilities that arise from recent advances in the area of deep learning for physics simulations. In particular, it will consider diffusion modeling and differentiable numerical solvers. These solvers provide crucial information for deep learning tasks in the form of gradients, which are especially important for time-dependent processes. In addition, existing numerical methods for efficient solvers can be leveraged within learning tasks. This paves the way for hybrid solvers in which traditional methods work alongside pre-trained neural network components. In this context, diffusion models and score matching will be discussed as powerful building blocks for training probabilistic surrogates. The capabilities of the resulting methods will be illustrated with examples such as wake flows and turbulent flow cases.

Previous talks 2023

In this talk, I will present an overview of recent works aiming at solving inverse problems (state and parameter estimation) by optimally combining measurement observations and parametrized PDE models. After defining a notion of optimal performance in terms of the smallest possible reconstruction error that any reconstruction algorithm can achieve, I will present practical numerical algorithms based on nonlinear reduced models for which we can prove that they deliver a performance close to optimal. The proposed concepts may be viewed as exploring alternatives to Bayesian inversion in favor of more deterministic notions of accuracy quantification. I will illustrate the performance of the approach on simple benchmark examples, and we will also discuss applications of the methodology to biomedical problems, which are challenging due to shape variability.

Stochasticity has been employed systematically in geophysical fluid dynamics (GFD) to model uncertainty. Geophysical flows are typically dominated by advection effects and contain a family of conserved quantities, of which energy and enstrophy are considered most important. Stochastic advection by Lie transport (SALT), which is a data-driven enstrophy-preserving transport noise, can be used to quantify uncertainty in these models. The first part of the presentation illustrates how SALT can be used efficiently to quantify uncertainty due to unresolved dynamics. However, this approach seems insufficient in the presence of discretization error. To counteract the effects of coarsening, we apply a simple data-driven stochastic subgrid-scale parametrization inspired by data assimilation algorithms. In the second part of the presentation, we show that the proposed parametrization recovers measured reference kinetic energy spectra in coarse numerical simulations.

In this presentation we show results for two types of AI models: the first is an approach to modelling CFD using AI libraries and conventional discretisation methods, and the second uses AI surrogates that are able to model unseen situations.

We show demonstrations for fluid flow and radiation transport problems, including urban flows, indoor flows, multi-phase flows, and nuclear reactors. We also show how AI can be used for data assimilation, uncertainty quantification and control using 4DVar, novel autoencoder methods and generative methods, and we discuss the use of graph neural networks in modelling.

Jet engines, scramjets, rockets, and solar energy receivers have been the focus of a sequence of computational projects at Stanford University. Featuring a combination of computer science and multi-physics turbulence simulations, the research is funded by the Department of Energy within the Advanced Simulation and Computing (ASC) Program. A common theme of the applications above is the coupled nature of the physical processes involving turbulent transport, combustion, radiation, compressible fluid mechanics, multiphase flow phenomena, etc. The research portfolio includes the development of the engineering models and software tools required for the simulations of the overarching applications, as well as innovations in high performance computing and machine learning aimed at enhancing simulation speed. To build confidence and improve the prediction accuracy of such simulations, the impact of uncertainties on the quantities of interest must also be measured.

Multifidelity uncertainty quantification methods use hierarchies of generalized numerical resolutions and model fidelities to obtain accurate quantitative estimates of the prediction accuracy without requiring many detailed calculations. This talk will trace back the history of the four projects at Stanford and how the initial efforts targeting demonstrations on the fastest supercomputer in 2000 (ASCI White) have evolved to enable the present ensemble simulations on today’s exascale class machines (Frontier and Aurora).  
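
The standard identity underlying such multilevel/multifidelity estimators, stated here in its multilevel Monte Carlo form for orientation (the projects above use various generalizations of it), telescopes the finest-level expectation through a sequence of cheap corrections:

\[
\mathbb{E}[Q_L] \;=\; \mathbb{E}[Q_0] \;+\; \sum_{\ell=1}^{L} \mathbb{E}[Q_\ell - Q_{\ell-1}],
\]

where each correction term has small variance whenever adjacent fidelities are strongly correlated, so most samples can be drawn from the cheapest models.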

Over many years, Xavier (UPC Barcelona) has performed many impressive DNS and LES simulations of turbulent fluid flows, including natural convection problems (Rayleigh-Bénard). This promises to be an exciting presentation with deep insights into numerics as well as many colourful results.

Previous talks 2022

Dimensional analysis is a robust technique for extracting insights and finding symmetries in physical systems, especially when the governing equations are not known. The Buckingham Pi theorem provides a procedure for finding a set of dimensionless groups from given parameters and variables. However, this set is often non-unique, which makes dimensional analysis an art that requires experience with the problem at hand. In this talk, I'll propose a data-driven approach that takes advantage of the symmetric and self-similar structure of available measurement data to discover dimensionless groups that best collapse the data to a lower-dimensional space according to an optimal fit. We develop three machine learning methods that use the Buckingham Pi theorem as a constraint: (i) a constrained optimization problem with a nonparametric function, (ii) a deep learning algorithm (BuckiNet) that projects the input parameter space to a lower dimension in the first layer, and (iii) a sparse identification of differential equations method to discover differential equations with dimensionless coefficients that parameterize the dynamics. I discuss the accuracy and robustness of these methods when applied to nonlinear systems whose dimensionless groups are known, and propose a few avenues for future research.
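
For reference, the classical (non-learned) starting point that these methods constrain themselves to: dimensionless groups correspond to null-space vectors of the dimension matrix. A minimal sketch, with pipe-flow variables chosen purely for illustration:

```python
# Classical Buckingham Pi via the null space of the dimension matrix.
# Rows are the fundamental units (M, L, T); columns are the chosen variables.
# Illustrative variables: density rho, velocity U, diameter D,
# dynamic viscosity mu, pressure drop dp.
import numpy as np
from scipy.linalg import null_space

#                     rho    U    D    mu    dp
dim_matrix = np.array([
    [ 1.0,  0.0, 0.0,  1.0,  1.0],   # mass M
    [-3.0,  1.0, 1.0, -1.0, -1.0],   # length L
    [ 0.0, -1.0, 0.0, -1.0, -2.0],   # time T
])

pi_basis = null_space(dim_matrix)    # each column = exponents of one Pi group
print(pi_basis.round(3))
# 5 variables, rank 3 => 2 dimensionless groups; the null space is spanned by
# e.g. Re = rho*U*D/mu and Eu = dp/(rho*U^2), but any basis is valid -- which
# is exactly the non-uniqueness the data-driven methods above exploit.
```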

Joseph Bakarji (University of Washington)

Large Eddy Simulations (LES) are of increasing interest for turbomachinery design since they provide a more reliable prediction of flow physics and component behaviour than standard RANS simulations. However, they remain prohibitively expensive at high Reynolds numbers or for realistic geometries. Most of the cost is associated with the resolution of the boundary layer, and therefore, to save computational resources, wall-modeled LES (wmLES) has become a valuable tool. Most existing analytical wall models assume the boundary layer to be fully turbulent, attached, flow-aligned, and near-equilibrium. However, these assumptions no longer hold for the complex flow patterns that frequently occur in turbomachinery passages (i.e., misalignment, separation, ...). Although relevant progress has been made in recent years, wall models have not always brought a clear benefit for such realistic flows. This work is a first step in addressing more complex flow features, such as separation, in the development of wall models, and proposes an innovative data-driven wall model for the treatment of separated flows.

Among the many possibilities to solve the complex regression problem (i.e., predicting the wall shear stress from instantaneous flow data and the geometry), deep neural networks have been selected for their universal approximation capabilities (K. Hornik, "Approximation capabilities of multilayer feedforward networks," Neural Networks, 4(2):251–257, 1991). In the present framework, the two-dimensional periodic hill problem is selected as a reference test case featuring the separation of a fully turbulent boundary layer. Gaussian Mixture Neural Networks (GMN) and Convolutional Neural Networks (CNN) combined with a self-attention layer (Vaswani et al., "Attention Is All You Need," 31st Conference on Neural Information Processing Systems, NIPS 2017, Long Beach, CA, USA) are trained to predict the two components of the wall shear stress using instantaneous flow quantities and geometric parameters. The input stencil is obtained by analysing space-time correlations (M. Boxho et al., "Analysis of space-time correlations for the two-dimensional periodic hill problem to support the development of wall models," ETMM13, 2021). Once the model is trained on different databases (flow on the two walls of the periodic hill, flow in a channel at Re_τ = 950), the model is implemented in a high-order Discontinuous Galerkin (DG) flow solver. The a priori and a posteriori validation of the data-driven wall models on the periodic hill problem will be presented. A discussion on the generalizability of the data-driven wall model will also be reported.

Conventional reduced order models (ROMs) anchored to the assumption of modal linear superimposition, such as proper orthogonal decomposition (POD), may prove inefficient when dealing with nonlinear time-dependent parametrized PDEs, especially for problems featuring coherent structures propagating over time. To enhance ROM efficiency, we propose a nonlinear approach to building ROMs that exploits deep learning (DL) algorithms, such as convolutional neural networks. In the resulting DL-ROM, both the nonlinear trial manifold and the nonlinear reduced dynamics are learned in a non-intrusive way by relying on DL algorithms trained on a set of full order model (FOM) snapshots, obtained for different parameter values. When dealing with large-scale FOMs, performing a preliminary dimensionality reduction on the FOM snapshots through POD substantially speeds up training times and decreases the network complexity. Accuracy and efficiency of the DL-ROM technique are assessed on different parametrized PDE problems in cardiac electrophysiology, computational mechanics and fluid dynamics, possibly accounting for fluid-structure interaction (FSI) effects, where new queries to the DL-ROM can be computed in real-time. Moreover, numerical results obtained by applying DL-ROMs to an industrial application, i.e. the approximation of the structural or electromechanical behaviour of Micro-Electro-Mechanical Systems (MEMS), will be shown.

Mathematical and computational modeling of the cardiovascular system is increasingly providing alternatives to traditional invasive clinical procedures and allowing for richer diagnostic metrics. In blood flows, the personalization of the models -- based on geometrically multi-scale fluid flows and fluid-solid interaction combined with medical images -- relies on formulating and solving appropriate inverse problems. In this talk, we will focus on the challenges and opportunities that appear when using data coming from magnetic resonance imaging to measure both the anatomy and the function of the vasculature.

Cristóbal Bertoglio (Bernoulli Institute - U Groningen)

In this talk we will consider reduced basis methods (RBM) for the model order reduction of parametric Hamiltonian dynamical systems describing nondissipative phenomena. The development of RBM for Hamiltonian systems is challenged by two main factors: (i) failing to preserve the geometric structure encoding the physical properties of the dynamics, such as invariants of motion or symmetries, might lead to instabilities and unphysical behaviors of the resulting approximate solutions; (ii) the local low-rank nature of transport-dominated and nondissipative phenomena demands large reduced spaces to achieve sufficiently accurate approximations. We will discuss how to address these aspects via a structure-preserving nonlinear reduced basis approach based on dynamical low-rank approximation. The gist of the proposed method is to evolve low-dimensional surrogate models on a phase space that adapts in time while being endowed with the geometric structure of the full model. If time permits, we will also discuss a rank-adaptive extension of the proposed method where the dimension of the reduced space can change during the time evolution.

Data assimilation is broadly used in many practical situations, such as weather forecasting, oceanography and subsurface modelling. There are some challenges in studying these physical systems. For example, their state cannot be directly and accurately observed, or the underlying time-dependent system is chaotic, which means that small changes in initial conditions can lead to large changes in prediction accuracy. The aim of data assimilation is to correct errors in the state estimation by incorporating information from measurements into the mathematical model. The most widely used data-assimilation methods are variational methods. They aim at finding an optimal initial condition of the dynamical model such that the distance to the observations is minimized (under the constraint that the estimate is a solution of the dynamical system). The problem is formulated as the minimization of a nonlinear least-squares problem with respect to the initial condition, and it is usually solved using a Gauss-Newton method. We propose a variational data-assimilation method that also minimizes a nonlinear least-squares problem, but with respect to the trajectory over a whole time window at once. The goal is to obtain a more accurate estimate. We prove convergence of the method in the case of noise-free observations and provide an error bound in the case of noisy observations. We confirm our theoretical results with numerical experiments using Lorenz models.
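
For context, the classical strong-constraint variational (4D-Var) cost function, minimized over the initial condition alone, reads (standard notation, not specific to this talk; the proposed method instead minimizes over the whole trajectory x_0, ..., x_N):

\[
J(x_0) \;=\; \tfrac{1}{2}\,\|x_0 - x_b\|^2_{B^{-1}}
\;+\; \tfrac{1}{2}\sum_{k=0}^{N} \big\| y_k - \mathcal{H}_k\big(\mathcal{M}_{0\to k}(x_0)\big) \big\|^2_{R_k^{-1}},
\]

where \(\mathcal{M}_{0\to k}\) is the model propagator, \(\mathcal{H}_k\) the observation operator, and \(B\), \(R_k\) the background and observation error covariances.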

Nazanin Abedini (Vrije Universiteit Amsterdam)

We consider the point evaluation of the solution to interface problems with geometric uncertainties, where the uncertainty in the obstacle is described by a high-dimensional parameter, as a prototypical example of non-smooth dependence of a quantity of interest on the parameter. We focus in particular on an elliptic interface problem and a Helmholtz transmission problem. The non-smooth parameter dependence poses a challenge when one is interested in building surrogates. In this talk we propose to use deep neural networks for this purpose. We provide a theoretical justification for why we expect neural networks to provide good surrogates. Furthermore, we present numerical experiments showing their good performance in practice. We observe in particular that neural networks do not suffer from the curse of dimensionality, and we study the dependence of the error on the number of point evaluations (which coincides with the number of discontinuities in the parameter space), as well as on several modeling parameters, such as the contrast between the two materials and, for the Helmholtz transmission problem, the wavenumber.

Laura Scarabosio (Radboud University)

Previous talks 2021

I will start with an overview of the research activities carried out by the Predictive Science Laboratory (PSL) at Purdue. In particular, I will use our work at the Resilient Extra-Terrestrial Habitats Institute (NASA) to motivate the need for physics-informed neural networks (PINNs) for high-dimensional uncertainty quantification (UQ), automated discovery of physical laws, and complex planning. The current state of these three problems ranges from manageable to challenging to open, respectively. The rest of the talk will focus on PINNs for high-dimensional UQ and, in particular, on stochastic PDEs. I will argue that for such problems the squared integrated residual is not always the right choice. Using a stochastic elliptic PDE, I will derive a suitable variational loss function by extending the Dirichlet principle. This loss function exhibits (in the appropriate Hilbert space) a unique minimum that provably solves the desired stochastic PDE. Then, I will show how one can parameterize the solution using DNNs and construct a stochastic gradient descent algorithm that converges. Subsequently, I will present numerical evidence illustrating the benefits of this approach over the squared integrated residual, and I will highlight its capabilities and limitations, including some of the remaining open problems.
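
The Dirichlet principle invoked here states that, for the elliptic problem \(-\nabla\cdot(a\nabla u) = f\) on a domain \(D\) with homogeneous Dirichlet conditions, the solution is the unique minimizer of an energy functional; a natural stochastic extension is sketched on the right (schematic only — the talk derives the precise loss):

\[
u = \arg\min_{v \in H_0^1(D)} \int_D \left(\tfrac{1}{2}\,a\,|\nabla v|^2 - f\,v\right) dx,
\qquad
u = \arg\min_{v} \;\mathbb{E}_\omega\!\left[\int_D \left(\tfrac{1}{2}\,a(\cdot,\omega)\,|\nabla v|^2 - f\,v\right) dx\right].
\]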

Video

Ilias Bilionis (School of Mechanical Engineering, Purdue University)

Detecting causal relationships and physically meaningful patterns in the complex climate system is an important but challenging problem. In my presentation I will show recent progress on both problems using machine learning approaches. First, I will show evidence that Reservoir Computing is able to systematically identify causal relationships between variables, including the causal direction, coupling delays, and causal chain relations, from time series. Reservoir Computing Causality has three advantages: (i) robustness to noisy time series; (ii) computational efficiency; and (iii) seamless causal inference from high-dimensional data. Second, I will demonstrate that Multi-Resolution Dynamic Mode Decomposition can systematically identify physically meaningful patterns in high-dimensional climate data; in particular, it is able to extract the changing annual cycle.

Computational models, a.k.a. simulators, are used in all fields of engineering and applied sciences to help design and assess complex systems in silico. Advanced analyses such as optimization or uncertainty quantification, which require repeated runs with varying input parameters, cannot be carried out with brute-force methods such as Monte Carlo simulation due to computational costs. Hence the recent development of surrogate models such as polynomial chaos expansions and Gaussian processes, among others. For so-called stochastic simulators, used e.g. in epidemiology, mathematical finance or wind turbine design, an intrinsic source of stochasticity exists on top of well-identified system parameters. As a consequence, for a given vector of inputs, repeated runs of the simulator (called replications) will provide different results, as opposed to the case of deterministic simulators. Consequently, for each single input, the response is a random variable to be characterized.
In this talk we present an overview of the literature devoted to building surrogate models of such simulators, which we call stochastic emulators. We then focus on a recent approach based on generalized lambda distributions and polynomial chaos expansions. The approach can be used with or without replications, which brings efficiency and versatility. As an outlook, practical applications to sensitivity analysis will also be presented.
Acknowledgments: This work is carried out together with Xujia Zhu, a PhD student supported by the Swiss National Science Foundation under Grant Number #175524 "SurrogAte Modelling for stOchastic Simulators (SAMOS)".
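
For reference, the generalized lambda distribution is defined through its quantile function; in the common FKML parameterization (shown here for orientation — the gist of the approach is to make the λ's input-dependent through polynomial chaos expansions; exact details are in the speaker's papers):

\[
Q(u;\boldsymbol{\lambda}) = \lambda_1 + \frac{1}{\lambda_2}\left(\frac{u^{\lambda_3}-1}{\lambda_3} - \frac{(1-u)^{\lambda_4}-1}{\lambda_4}\right), \quad u \in (0,1),
\qquad
\lambda_i(\boldsymbol{x}) \approx \sum_{\alpha} c_{i,\alpha}\,\Psi_\alpha(\boldsymbol{x}),
\]

so that a single surrogate returns, for each input \(\boldsymbol{x}\), an entire response distribution rather than a point value.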

Video

Bruno Sudret (ETH Zürich, Chair of Risk, Safety and Uncertainty Quantification)

Atmospheric models used for weather and climate prediction are traditionally formulated in a deterministic manner. In other words, given a particular state of the resolved scale variables, the most likely forcing from the sub-grid scale motion is estimated and used to predict the evolution of the large-scale flow. However, the lack of scale-separation in the atmosphere means that this approach is a large source of error in forecasts. Over the last decade an alternative paradigm has developed: the use of stochastic techniques to characterise uncertainty in small-scale processes. These techniques are now widely used across weather, seasonal forecasting, and climate timescales.

While there has been significant progress in emulating parametrisation schemes using machine learning, the focus has been entirely on deterministic parametrisations. In this presentation I will discuss data-driven approaches for stochastic parametrisation. I will describe experiments which develop a stochastic parametrisation using the generative adversarial network (GAN) machine learning framework for a simple atmospheric model. I will conclude by discussing the potential of this approach for complex weather and climate prediction models.
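
In its standard conditional form (written here with the resolved state as the conditioning variable c — this conditioning choice is an illustration of the setup, not necessarily the talk's exact configuration), the GAN objective reads:

\[
\min_G \max_D \;\; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x \mid c)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z \mid c) \mid c)\big)\big],
\]

so that the trained generator produces samples of the sub-grid forcing conditioned on the resolved flow, rather than a single deterministic estimate.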

Video

Hannah Christensen (Oxford)

In the talk, I will discuss a general closure framework to compensate for the model error arising from missing dynamical systems. The proposed framework reformulates the model error problem into a supervised learning task to estimate a very high-dimensional closure model, deduced from the Mori-Zwanzig representation of a projected dynamical system with a projection operator chosen based on Takens' embedding theory. Besides theoretical convergence, this connection provides a systematic framework for closure modeling using available machine learning algorithms. I will demonstrate numerical results using a kernel-based linear estimator as well as neural network-based nonlinear estimators. If time permits, I will also discuss error bounds and mathematical conditions that allow the estimated model to reproduce the underlying stationary statistics, such as one-point statistical moments and auto-correlation functions, in the context of learning Itô diffusions.
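
Schematically, the Mori-Zwanzig representation splits the dynamics of the resolved variables into a Markovian part, a memory integral, and orthogonal dynamics (the generic generalized-Langevin form is shown here for orientation; the talk's closure estimates the non-Markovian contributions from data):

\[
\frac{dx}{dt} \;=\; \underbrace{f(x(t))}_{\text{Markovian}} \;+\; \underbrace{\int_0^t K(t-s)\,x(s)\,ds}_{\text{memory}} \;+\; \underbrace{\eta(t)}_{\text{orthogonal dynamics}},
\]

with the kernel \(K\) and the noise-like term \(\eta(t)\) to be modeled.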

Video

John Harlim (Penn State)

The unique approximation properties of deep architectures have attracted attention in recent years as a foundation for data-driven modeling in scientific machine learning (SciML) applications. The "black-box" nature of DNNs, however, requires large amounts of training data; the resulting models generalize poorly in traditional engineering settings where available data is relatively scarce, and it is generally difficult to provide a priori guarantees about their accuracy and stability. We adopt the perspective that tools from mimetic discretization of PDEs may be adapted to SciML settings, developing architectures and fast optimizers tailored to the specific needs of SciML. In particular, we focus on: realizing convergence competitive with FEM, preserving topological structure fundamental to conservation and multiphysics, and providing stability guarantees. In this talk we introduce some motivating applications at Sandia spanning shock magnetohydrodynamics and semiconductor physics before providing an overview of the mathematics underpinning these efforts.

Nathaniel Trask (Sandia)

The one-dimensional two-fluid model (TFM) is a simplified model for multiphase flow in pipes. It is derived from a spatial averaging process, which introduces a closure problem concerning the wall and interface friction terms, similar to the closure problem in turbulence. To tackle this closure problem, we have approximated the friction terms by neural networks trained on data from direct numerical simulations (DNS).

Besides the closure problem, the two-fluid model has a long-standing stability issue: it is only conditionally well-posed. In order to tackle this issue, we analyze the underlying structure of the TFM in terms of the energy behavior. We show the new result that energy is an inherent 'secondary' conserved property of the mass and momentum conservation equations of the model. Furthermore, we develop a new spatial discretization that exactly conserves this energy in simulations.

The importance of structure preservation, and the core of our analysis, is not limited to the TFM. Neural networks that approximate physical systems can also be designed to preserve the underlying structure of a PDE. In this way, physics-informed machine learning can yield more physical results.

A novel multi-level method for partial differential equations with uncertain parameters is proposed. The principle behind the method is that the error between grid levels in multi-level methods has a spatial structure that is, to a good approximation, independent of the actual grid level. Our method learns this structure by employing a sequence of convolutional neural networks, which are well-suited to automatically detect local error features as latent quantities of the solution. Furthermore, by using the concept of transfer learning, the information from coarse grid levels is reused on fine grid levels in order to minimize the required number of samples on fine levels. The method outperforms state-of-the-art multi-level methods, especially when complex PDEs (such as single-phase and free-surface flow problems) are concerned, or when high accuracy is required.
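
A conceptual sketch of the learned correction (the names, architecture, and the estimator comment below are placeholders, not the paper's implementation): a small CNN per level is trained to predict the fine-minus-coarse error field from the coarse solution, so that most fine-level solves can be replaced by predicted corrections.

```python
# Conceptual sketch of a learned multi-level correction (illustrative only).
import torch

class ErrorCNN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(16, 16, 3, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, coarse_field):
        return self.net(coarse_field)   # predicted fine-minus-coarse error

def train_level(model, coarse_fields, errors, epochs=100):
    """coarse_fields, errors: (n_samples, 1, H, W) tensors from paired runs."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        pred = model(coarse_fields)
        loss = (pred - errors).pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return model

# Estimator idea: approximate E[u_fine] by averaging, over many cheap coarse
# samples, (coarse solution + CNN-predicted correction); only the few true
# fine-level solves needed to generate the training pairs are required.
```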

Predictions of systems described by multiple alternative models are of importance for many applications in science and engineering, namely when it is not possible to identify a model that significantly outperforms every other model for each criterion. Two mathematical approaches tackling this issue are Bayesian Model Averaging (BMA) [1, 2], which builds an average of the concurrent models weighted by their marginal posterior probabilities, and Stacking [3, 4], where the unknown prediction is projected on a basis of alternative models, with weights to be learned from data. In both approaches, the weights are generally constant throughout the domain. More recently, Yu et al. [5] have proposed the Clustered Bayesian Averaging (CBA) algorithm, which leverages an ensemble of Regression Trees (RT) to infer weights as space-dependent functions. Similarly, we propose a Space-Dependent Stacking (SDS) algorithm which modifies the stacking formalism to include space-dependent weights, based on a spatial decomposition method.
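
For orientation, the BMA predictive distribution for a quantity Δ given data D and models M_1, ..., M_K [1, 2], together with its space-dependent generalization (weights w_k(x) varying over the domain, as in CBA/SDS):

\[
p(\Delta \mid D) \;=\; \sum_{k=1}^{K} p(\Delta \mid M_k, D)\, p(M_k \mid D),
\qquad
p(\Delta(x) \mid D) \;=\; \sum_{k=1}^{K} p(\Delta(x) \mid M_k, D)\, w_k(x).
\]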

In this work, the above-mentioned methods are investigated in a Computational Fluid Dynamics (CFD) context. Specifically, CFD of engineering systems often relies on Reynolds-Averaged Navier-Stokes (RANS) models to describe the effect of (unresolved) turbulent motions on the mean (resolved) field. Since all turbulent motions are modelled, RANS turbulence models tend to be uncertain and case-dependent. Quantifying and reducing such uncertainties is then of the utmost importance for aerodynamics design in general, and specifically for the analysis and optimization of complex turbomachinery flows. In previous work [6], the present authors used Bayesian model averages of RANS models for providing improved predictions of a compressor cascade configuration, along with a quantification of the confidence intervals associated with modelling uncertainties. Constant weights throughout the field were used. It is however expected, based on theoretical considerations and expert judgment, that different RANS models tend to perform better in some flow regions than in others, and consequently they should be assigned space-varying weights. For this reason, we implement and assess space-dependent averages of RANS models for compressor flow predictions. More precisely, the focus is put on two alternative algorithms: (i) a version of CBA adapted to flow variable fields, and (ii) a Space-Dependent Stacking (SDS) method based on a Karhunen-Loève decomposition of the mixture weights. Flow regions are described using selected features, formulated as functions of the mean flow quantities. Given a set of concurrent RANS models and a database of reference flow data corresponding to various operating conditions, the two algorithms are first trained against data, and subsequently applied to the prediction of an unobserved flow, i.e. another operating condition of the compressor cascade. The algorithms assign a probability to each model in each region of the feature space, based on their ability to accurately predict selected Quantities of Interest (QoI) in this region. The space-dependent weighted average of the RANS models applied to the prediction scenario is used to reconstruct the expected solution and the associated confidence intervals. Preliminary results show that both methods generally yield more accurate solutions than the constant-weight BMA method, and provide a valuable estimate of the uncertainty intervals.

References

[1] David Madigan, Adrian E Raftery, C Volinsky, and J Hoeting. 

Maximilien de Zordo-Banliat (Safran Tech, DynFluid Laboratory)