M2P 2023

Interpretable Compression of Fluid Flows Using Graph Neural Networks

  • Barwey, Shivam (Argonne National Laboratory)
  • Maulik, Romit (Argonne National Laboratory)


Reduced-order modeling (ROM) strategies based on standard neural network architectures have been shown to accelerate traditional computational fluid dynamics (CFD) simulations for rapid design optimization and prediction of a wide range of fluid flows. Although these data-based modeling strategies have been broadly successful, key limitations prohibit their extension to practical flow configurations. Many realistic fluid flow applications involve complex geometries (propulsion devices with various injection schemes, airfoil geometries, wind turbines, etc.); data-based ROM development in such scenarios must therefore be compatible with flowfields stored on unstructured grids, and must not require additional, highly expensive training stages whenever the geometric configuration changes. Another limitation is model interpretability: in many studies, data-based ROM tools are essentially black boxes with uninterpretable latent spaces. As such, it is necessary to adopt new data-based frameworks that (a) easily interface with unstructured grid data, (b) are not restricted to a single geometric configuration after training, and (c) are physically interpretable. This work addresses these points through the development of a novel graph neural network (GNN) autoencoder based on the message passing paradigm. More specifically, a baseline graph U-net is modified for autoencoding, where compression comes from the combination of multi-scale message passing layers and an adaptive Top-K pooling strategy. In the context of fluid flow compression, the pooling strategy is shown to recover highly interpretable latent spaces tailored to the regression task at hand, revealing a novel pathway for graph-based surrogate modeling.
In particular, the autoencoder produces a latent graph that can be visualized directly in physical space, identifies coherent structures in the domain, and is described by an adjacency matrix that adapts in time with the evolution of the flow. The GNN architecture used in the autoencoding application is then extended to a forecasting setting, where it is designed to recover volumetric source terms of the input fields at the nodes instead of the identity map, extending previous work in GNN-based fluid flow forecasting. GNN evaluations are demonstrated on unsteady turbulent flows in a backward-facing step configuration at high Reynolds numbers.
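To make the compression mechanism concrete, the following is a minimal NumPy sketch of Top-K graph pooling in the style of Gao & Ji's graph U-net, which the abstract builds on. It is an illustrative assumption, not the authors' implementation: the projection vector `p` would in practice be learned jointly with the message passing layers, and the score-gating uses a `tanh` squash as in the original Top-K formulation. The retained node indices are what make the latent graph interpretable, since they correspond to physical locations in the flow domain.

```python
import numpy as np

def top_k_pool(x, adj, p, ratio=0.5):
    """Minimal Top-K graph pooling sketch (hypothetical, NumPy only).

    x     : (N, F) node feature matrix
    adj   : (N, N) adjacency matrix of the input graph
    p     : (F,)   projection vector (learnable in a real model)
    ratio : fraction of nodes retained in the latent graph
    """
    # Score each node by projecting its features onto p.
    scores = x @ p / (np.linalg.norm(p) + 1e-12)

    # Keep the k highest-scoring nodes.
    k = max(1, int(ratio * x.shape[0]))
    idx = np.argsort(scores)[::-1][:k]

    # Gate retained features by squashed scores, so that in a trained
    # model gradients flow back into the scoring vector p.
    x_pool = x[idx] * np.tanh(scores[idx])[:, None]

    # Latent graph: the adjacency matrix restricted to retained nodes.
    # Because idx changes with the input snapshot, this adjacency
    # adapts in time with the evolution of the flow.
    adj_pool = adj[np.ix_(idx, idx)]
    return x_pool, adj_pool, idx
```

The returned `idx` maps latent nodes back to mesh locations, which is how the pooled latent graph can be plotted directly in physical space and inspected for coherent structures.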