Physics-Informed Deep Learning of Incompressible Fluid Dynamics
Fast and stable fluid simulations are an essential prerequisite for applications ranging from computer-generated imagery to computer-aided design in research and development. Solving the partial differential equations governing the dynamics of incompressible fluids, however, is a challenging task, and traditional numerical approximation schemes come at high computational costs. Recent deep-learning-based approaches promise vast speed-ups but either do not generalize to new domain geometries or rely on complex pipelines that outsource major parts of the fluid simulation to traditional methods. In this work, we propose a novel unsupervised training framework that allows powerful convolutional neural networks to learn the entire update step of mapping a fluid state at time point t to a subsequent state at time t+dt. For this purpose, we introduce a physics-informed loss function that penalizes residuals of the incompressible Navier-Stokes equations on a staggered grid. This greatly simplifies the pipeline to train and evaluate neural fluid models. After training, the framework yields models that are capable of fast fluid simulations and can handle various fluid phenomena, including the Magnus effect and Kármán vortex streets. We present an interactive real-time demo and show that trained models can generalize to new domain geometries unseen during training. Moreover, the trained neural networks offer a differentiable update step to advance the fluid simulation in time and can thus be used as efficient differentiable fluid solvers. This can be exploited for optimal control tasks, as we demonstrate in a proof-of-concept experiment. Our models significantly outperform a recent differentiable fluid solver in terms of computational speed and accuracy.
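To illustrate one ingredient of such a physics-informed loss, the sketch below computes the incompressibility residual div(u) = 0 on a 2D staggered (MAC) grid by finite differences and penalizes its squared norm. This is a minimal NumPy illustration, not the authors' implementation; the function names, grid shapes, and uniform grid spacing are assumptions, and the full loss in the paper also includes the momentum residuals of the Navier-Stokes equations.

```python
import numpy as np


def divergence_residual(u, v, dx=1.0):
    """Cell-centered incompressibility residual div(u) on a staggered (MAC) grid.

    u: x-velocities on vertical cell faces, shape (ny, nx + 1)
    v: y-velocities on horizontal cell faces, shape (ny + 1, nx)
    Returns an array of shape (ny, nx); illustrative helper, not the paper's code.
    """
    du_dx = (u[:, 1:] - u[:, :-1]) / dx  # difference across each cell in x
    dv_dy = (v[1:, :] - v[:-1, :]) / dx  # difference across each cell in y
    return du_dx + dv_dy


def incompressibility_loss(u, v, dx=1.0):
    """Mean squared divergence: one term of a physics-informed loss."""
    r = divergence_residual(u, v, dx)
    return float(np.mean(r ** 2))


# A constant velocity field is divergence-free, so this term of the loss vanishes.
ny, nx = 8, 8
u = np.ones((ny, nx + 1))
v = np.ones((ny + 1, nx))
assert incompressibility_loss(u, v) == 0.0
```

In practice the same finite-difference stencil would be applied to the network's predicted velocity field inside an automatic-differentiation framework, so that gradients of the residual penalty flow back into the network weights during unsupervised training.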