This seminar is organized by the Scientific Computing group of CWI Amsterdam. The focus is on the application of Machine Learning (ML) and Uncertainty Quantification (UQ) in scientific computing. Topics of interest include, among others:
- combination of data-driven models and (multiscale) simulations,
- new ML architectures suited for scientific computing or UQ,
- incorporation of (physical) constraints into data-driven models,
- efficient (online) learning strategies,
- using ML for dimension reduction / creating surrogates,
- inverse problems using ML surrogates,
and any other topic in which some form of ML and/or UQ is used to enhance (existing) scientific computing methodologies. All applications are welcome, be they financial, physical, biological, or otherwise.
For more information, or if you'd like to attend one of the talks, please contact Wouter Edeling of the SC group.
Schedule of upcoming talks
April
30 April 2024 11h00 CET: Marius Kurz (Centrum Wiskunde & Informatica): Learning to Flow: Machine Learning and Exascale Computing for Next-Generation Fluid Dynamics
The computational sciences have become an essential driver for understanding the dynamics of complex, nonlinear systems, ranging from the dynamics of the earth’s climate to obtaining information about a patient’s characteristic blood flow in order to derive personalized approaches in medical therapy. These advances can be ascribed on the one hand to the exponential increase in available computing power, which has allowed the simulation of increasingly large and complex problems and has led to the emerging generation of exascale systems in high-performance computing (HPC). On the other hand, methodological advances in discretization methods and in the modeling of turbulent flow have significantly increased the fidelity of simulations in fluid dynamics. Here, recent advances in machine learning (ML) have opened up a whole field of novel, promising modeling approaches.
This talk will first introduce the potential of GPU-based simulation codes in terms of energy-to-solution using the novel GALÆXI code. Next, the integration of machine learning methods into large eddy simulation will be discussed, with emphasis on their a posteriori performance, the stability of the simulation, and the interaction between the turbulence model and the discretization. Based on this, Relexi is introduced as a potent tool that allows HPC simulations to be employed as training environments for reinforcement learning models at scale, and thus helps converge HPC and ML.
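The idea of using a flow solver as a reinforcement-learning environment can be sketched in miniature. In the toy example below, a cheap scalar "simulation" plays the role of the HPC environment, a single closure coefficient plays the role of the policy, and a hill-climbing update stands in for the learning algorithm; the solver, reward, and update rule are all illustrative assumptions and not Relexi's actual interface.

```python
import numpy as np

def run_simulation(c_model, nsteps=50):
    """Toy 'simulation': energy decay with a modelled dissipation
    coefficient c_model; the hidden reference rate is 0.3.
    The reward is the negative mismatch with the reference run."""
    u, u_ref, err = 1.0, 1.0, 0.0
    for _ in range(nsteps):
        u -= 0.01 * c_model * u       # modelled dissipation
        u_ref -= 0.01 * 0.3 * u_ref   # "true" dissipation
        err += (u - u_ref) ** 2
    return -err

# Episode loop: perturb the model coefficient, run the "environment",
# and keep the perturbation only if the reward improves.
rng = np.random.default_rng(0)
c = 1.0
for episode in range(200):
    trial = c + 0.1 * rng.standard_normal()
    if run_simulation(trial) > run_simulation(c):
        c = trial

print(f"learned closure coefficient: {c:.2f}")
```

The learned coefficient drifts toward the reference value 0.3, since that is where the reward is maximal; in the actual Relexi setting the environment is a full HPC simulation and the policy is a neural network trained by a proper RL algorithm.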
May
23 May 2024 11h00 CET: Frans van der Meer (Delft University of Technology): Multiscale modeling of composite materials with data-driven surrogates
For computational modeling of the mechanical performance of advanced engineering materials like fiber-reinforced composites, multiple levels of observation are relevant. For composite laminates, high-fidelity models have been developed for the mesoscale, where individual layers in the laminate are modeled as homogeneous orthotropic material, and for the microscale, where individual fibers embedded in the polymer matrix are modeled explicitly. There is a long-standing vision to couple these scales in concurrent multiscale analysis. However, the coupled multiscale approach comes with excessive computational cost, because the microscopic problem needs to be evaluated many times. Redundancy in this computational effort potentially offers a way out, namely through data-driven surrogates: the idea is that a smaller number of evaluations can be used to train a surrogate for the micromodel that gives accurate predictions at lower computational cost. However, the path-dependence of the material behavior renders the dimensionality of the training space unbounded, as a consequence of which data-driven surrogates do not generalize well even after being trained on a large amount of data.

In this presentation, a short overview of strategies for modeling composite materials at multiple scales is given, after which two approaches are presented that aim to limit the training burden for data-driven surrogates in the context of multiscale modeling. The first is an active learning approach with a Gaussian-process-based surrogate, which exploits the fact that in any given multiscale simulation only a small subspace of the complete microscopic input space is explored. The second is a physically recurrent neural network, in which classical constitutive models are embedded in a neural network to incorporate physics-based memory, allowing for superior generalization.
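The active-learning idea mentioned in the abstract can be illustrated with a minimal Gaussian-process surrogate: evaluate the expensive model only where the surrogate is most uncertain. Everything below is a hedged sketch under simplifying assumptions; the `micromodel` function is a one-dimensional stand-in for a microscale evaluation, not the actual micromodel from the talk.

```python
import numpy as np

def micromodel(x):
    """Stand-in for an expensive microscale evaluation (an assumption
    for illustration; the real micromodel is a finite-element solve)."""
    return np.sin(3.0 * x) + 0.5 * x

def rbf_kernel(A, B, length=0.3):
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X_train, y_train, X_query, noise=1e-8):
    """Standard GP regression: posterior mean and stddev on X_query."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_query, X_train)
    Kss = rbf_kernel(X_query, X_query)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    std = np.sqrt(np.maximum(np.diag(cov), 0.0))
    return mean, std

# Active learning: start from a few samples, then repeatedly evaluate
# the micromodel at the point of maximum predictive uncertainty.
X = np.array([-1.0, 0.0, 1.0])
y = micromodel(X)
grid = np.linspace(-1.0, 1.0, 201)
for _ in range(10):
    _, std = gp_posterior(X, y, grid)
    x_new = grid[np.argmax(std)]
    X = np.append(X, x_new)
    y = np.append(y, micromodel(x_new))

mean, _ = gp_posterior(X, y, grid)
max_err = np.max(np.abs(mean - micromodel(grid)))
print(f"surrogate max error after active learning: {max_err:.4f}")
```

Because each new sample is placed where the surrogate knows least, a handful of micromodel evaluations suffices here; in the multiscale setting the same principle restricts training to the small subspace of inputs actually visited by the simulation.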
27 May 2024 11h00 CET: Beatriz Moya (CNRS@CREATE): Exploring the role of geometric and learning biases in Model Order Reduction and Data-Driven simulation
This talk highlights the practical application and synergistic use of geometric and learning biases in interpretable and consistent deep learning for complex problems. We propose the use of Geometric Deep Learning for Model Order Reduction. Its high generalizability, even with limited data, facilitates real-time evaluation of partial differential equations (PDEs) for complex behaviors and changing domains. Additionally, we showcase the application of Thermodynamics-Informed Machine Learning as an alternative when the physics of the system under study is not fully known. This algorithm results in a cognitive digital twin capable of self-correction for adapting to changing environments when only partial evaluations of the dynamical state are available. Finally, the integration of Geometric Deep Learning and Thermodynamics-Informed Machine Learning produces an enhanced combined effect with high applicability in real-world domains.
June
20 June 2024 10h00 CET: Francesca Bartolucci (Delft University of Technology)