SEMINAR++ Scientific Machine Learning (Semester Programme)

In October, November and December, renowned international researchers will visit CWI for several days in order to present, discuss and interact on open problems in scientific machine learning with the Dutch community.

For more information, contact Wouter Edeling.

Speakers

Heng Xiao, Professor of Data-Driven Fluid Dynamics, University of Stuttgart.

An Equivariant Neural Operator for Developing Nonlocal Tensorial Constitutive Models

Developing robust constitutive (or closure) models is a fundamental problem for accelerating the simulation of complicated physics such as turbulent flows. Traditional constitutive models based on partial differential equations (PDEs) often lack robustness and are too rigid to accommodate diverse calibration datasets. We propose a frame-independent, nonlocal constitutive model based on a vector-cloud neural network that can be learned from unstructured data. The model predicts the closure variable at a point based on a collection of neighboring points (referred to as a “cloud”). The cloud is mapped to the closure variable through a neural network that is invariant to coordinate translation and rotation as well as to the ordering of points in the cloud. The merits of the proposed network are demonstrated for scalar and tensor transport PDEs on a family of parameterized periodic hill geometries. The vector-cloud neural network is a promising tool not only as a nonlocal constitutive model but also as a general surrogate model for PDEs on irregular domains.
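As a rough illustration of the three invariances named in the abstract, the sketch below builds a Deep-Sets-style network that predicts a scalar closure value from a point cloud. It is a toy under stated assumptions, not the speaker's vector-cloud architecture: translation invariance comes from centering the coordinates, rotation invariance from feeding the network only distances and frame-independent scalars, and permutation invariance from mean pooling; all class and parameter names are invented for the example.

```python
# Minimal sketch (not the speaker's architecture): a Deep-Sets-style network
# mapping a "cloud" of neighbour points to a scalar closure value, using only
# translation- and rotation-invariant inputs and permutation-invariant pooling.
import torch
import torch.nn as nn

class InvariantCloudNet(nn.Module):
    def __init__(self, n_scalar_feats: int, hidden: int = 64):
        super().__init__()
        # Per-point encoder: sees the distance to the cloud centre plus any
        # frame-independent scalar features carried by each point.
        self.encoder = nn.Sequential(
            nn.Linear(1 + n_scalar_feats, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Decoder acting on the pooled (order-independent) cloud embedding.
        self.decoder = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1),
        )

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # xyz:   (batch, n_points, 3) absolute coordinates of the cloud
        # feats: (batch, n_points, n_scalar_feats) frame-independent scalars
        rel = xyz - xyz.mean(dim=1, keepdim=True)   # translation invariance
        dist = rel.norm(dim=-1, keepdim=True)       # rotation invariance
        h = self.encoder(torch.cat([dist, feats], dim=-1))
        return self.decoder(h.mean(dim=1))          # permutation invariance

model = InvariantCloudNet(n_scalar_feats=2)
pred = model(torch.randn(8, 32, 3), torch.randn(8, 32, 2))  # shape (8, 1)
```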

Paola Cinnella, Professor, Institut Jean Le Rond D'Alembert at Sorbonne University.

Data-driven correction and uncertainty quantification of turbulence models using Bayesian learning and multi-model ensembles

Reynolds-averaged Navier-Stokes (RANS) models of turbulent flow are the cornerstone of flow analysis and design in fluids engineering, despite several inherent limitations that prevent them from capturing the correct physics of flows even in simple configurations. Instead, these models are developed and tuned to match certain quantities of interest to the engineer while providing reasonable performance over a wide range of flow situations.
On the other hand, the increased availability of high-fidelity data from both advanced numerical simulations and flow experiments has fostered the development of a multitude of “data-driven” turbulence models based on data assimilation, Bayesian calibration, and machine learning techniques. Although these models can provide significantly better results than classical models for the narrow class of flows for which they are trained, their generalization capabilities remain far inferior to those of classical models, while the computational cost of model training and validation is significant.

In this talk I will present a methodology for developing data-driven models with improved generalization capabilities while delivering estimates of the predictive uncertainty.
The methodology combines a sparse Bayesian learning algorithm for the symbolic identification of stochastic turbulence model corrections with multi-model ensemble techniques that yield robust predictions of unseen flow cases. The approach is demonstrated on flow cases from the NASA turbulence model testing challenge.
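The sketch below illustrates, on hypothetical synthetic data, the two generic ingredients named above: sparse Bayesian regression (here via scikit-learn's ARD prior) that selects a few terms from a library of candidate symbolic functions, and a simple multi-model ensemble whose spread serves as a crude uncertainty estimate. It is an assumption-laden illustration, not the speaker's actual algorithm; the library terms and data are invented.

```python
# Illustration only: sparse Bayesian term selection + multi-model ensemble.
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 1))

# Library of candidate symbolic terms evaluated on the input feature x.
library = np.hstack([x, x**2, x**3, np.sin(np.pi * x), np.exp(x)])
names = ["x", "x^2", "x^3", "sin(pi x)", "exp(x)"]

# Synthetic "high-fidelity" correction: only two library terms are active.
y = 0.8 * x[:, 0]**2 - 0.5 * np.sin(np.pi * x[:, 0]) + 0.05 * rng.normal(size=200)

# (i) ARD places an individual prior precision on each coefficient, driving
# irrelevant terms towards zero -> a sparse, interpretable correction.
model = ARDRegression().fit(library, y)
for name, coef in zip(names, model.coef_):
    print(f"{name:10s} {coef:+.3f}")

# (ii) Multi-model ensemble: refit on resampled data, average the predictions,
# and use the ensemble spread as a rough predictive-uncertainty estimate.
ensemble = [ARDRegression().fit(library[idx], y[idx])
            for idx in (rng.choice(200, 150) for _ in range(5))]
preds = np.stack([m.predict(library) for m in ensemble])
mean, std = preds.mean(axis=0), preds.std(axis=0)
```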

Björn List, PhD candidate, Technical University of Munich.

Learning PDE Simulators for Applications in Turbulence Modeling

Björn List, Li-Wei Chen, and Nils Thuerey

School of Computation, Information and Technology, Technical University of Munich, D-85748 Garching, Germany. bjoern.list@tum.de, https://www.ge.in.tum.de/

We depend on mathematical models, particularly partial differential equations (PDEs), to grasp the dynamics of physical systems. Predicting their behavior entails the challenging and resource-intensive process of solving these PDEs. In the context of fluid mechanics, the high computational costs of simulating these chaotic dynamics have prompted the exploration of turbulence modeling. As an alternative to classical approaches, the integration of machine learning techniques offers a novel perspective for deriving efficient solvers. This talk delves into the training strategies employed in machine-learning-augmented simulators. Special emphasis is placed on unsteady fluid scenarios, where inferring long trajectories poses challenges with respect to the long-term accuracy and stability of the learned solver component. To this end, two training modalities are explored: loss functions tailored to turbulence modeling and trajectory unrolling. To study the effects of trajectory unrolling, we include the less widely used variant of unrolling without temporal gradients in addition to the commonly used one-step setups and fully differentiable unrolling. Our results demonstrate how unrolling optimization trajectories at training time is essential to obtaining accurate models. Comparing networks trained with differentiable and non-differentiable unrolling, as well as one-step approaches, enables us to explore the effects of unrolling on data shift and long-term gradients. These insights are complemented with a study on loss functions to yield long-term stable turbulence models for a variety of two-dimensional flow scenarios.
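To make the three training modalities concrete, the sketch below contrasts them on a toy 1-D system: one-step training, unrolling with the trajectory detached (no temporal gradients), and fully differentiable unrolling. The toy dynamics, network, and all names are illustrative assumptions, not the authors' setup.

```python
# Toy contrast of one-step vs. unrolled training of a learned solver step.
import torch
import torch.nn as nn

step_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def reference_step(u):                 # stand-in for a high-fidelity solver
    return u + 0.1 * torch.sin(u)

def rollout_loss(u0, n_steps, mode):
    u, ref, loss = u0, u0, 0.0
    for _ in range(n_steps):
        ref = reference_step(ref)      # ground-truth trajectory
        u = u + step_net(u)            # learned solver step
        loss = loss + ((u - ref) ** 2).mean()
        if mode == "no_gradient_unrolling":
            # Keep the data shift of long rollouts, but cut the temporal
            # gradient flow between consecutive steps.
            u = u.detach()
    return loss / n_steps

opt = torch.optim.Adam(step_net.parameters(), lr=1e-3)
for mode, n_steps in [("one_step", 1),
                      ("no_gradient_unrolling", 8),
                      ("differentiable", 8)]:      # gradients span all steps
    u0 = torch.rand(64, 1)
    opt.zero_grad()
    rollout_loss(u0, n_steps, mode).backward()
    opt.step()
```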

Giovanni Stabile, Assistant professor at University of Urbino.

From linear to nonlinear model order reduction: some results and perspectives

Non-affine parametric dependencies, nonlinearities, and advection-dominated regimes of the model of interest can result in a slow Kolmogorov n-width decay, which precludes the realization of efficient reduced-order models based on Proper Orthogonal Decomposition. Among the possible solutions are purely data-driven methods that leverage nonlinear approximation techniques such as autoencoders and their variants to learn a latent representation of the dynamical system and then evolve it in time with another architecture. Despite their success in many applications where standard linear techniques fail, more work is needed to increase the interpretability of the results, especially outside the training range and in regimes where data are scarce.
Moreover, none of the knowledge of the model's physics is exploited during the predictive phase. In this talk, to overcome these weaknesses, I present a variant of the nonlinear manifold method introduced in previous works, with hyper-reduction achieved through reduced over-collocation and teacher-student training of a reduced decoder. We test the methodology on problems of increasing complexity.
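For readers unfamiliar with the baseline the talk builds on, the sketch below shows the generic autoencoder-based reduction pattern referenced in the abstract: an encoder compresses the full state to a low-dimensional latent vector, a small network evolves the latent state in time, and a decoder maps back. It is a minimal sketch of that generic pattern only; the speaker's hyper-reduction, over-collocation, and teacher-student decoder training are not represented, and the dimensions and names are assumptions.

```python
# Generic autoencoder ROM pattern (baseline only, not the speaker's method).
import torch
import torch.nn as nn

n_full, n_latent = 1024, 8            # full-order vs. reduced dimension

encoder = nn.Sequential(nn.Linear(n_full, 128), nn.ELU(), nn.Linear(128, n_latent))
decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ELU(), nn.Linear(128, n_full))
latent_step = nn.Sequential(nn.Linear(n_latent, 64), nn.ELU(), nn.Linear(64, n_latent))

def train_step(snapshots, optimizer):
    # snapshots: (time, n_full) consecutive states of the full-order model
    z = encoder(snapshots)
    recon_loss = ((decoder(z) - snapshots) ** 2).mean()              # autoencoding
    dyn_loss = ((z[:-1] + latent_step(z[:-1]) - z[1:]) ** 2).mean()  # latent dynamics
    optimizer.zero_grad()
    (recon_loss + dyn_loss).backward()
    optimizer.step()

params = [*encoder.parameters(), *decoder.parameters(), *latent_step.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)
train_step(torch.randn(100, n_full), opt)   # one step on dummy snapshots

# Online phase: evolve only the cheap latent state, decoding on demand.
with torch.no_grad():
    z = encoder(torch.randn(1, n_full))
    for _ in range(50):
        z = z + latent_step(z)
    u_pred = decoder(z)
```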