1 code implementation • 22 May 2024 • Sifan Wang, Jacob H Seidman, Shyam Sankaran, Hanwen Wang, George J. Pappas, Paris Perdikaris
Our contributions can be viewed as a first step towards adapting advanced computer vision architectures for building more flexible and accurate machine learning models in the physical sciences.
no code implementations • 5 Mar 2024 • Hanwen Wang, Theinmozhi Arulraj, Alberto Ippolito, Aleksander S. Popel
Virtual patients and digital patients/twins are two related concepts gaining increasing attention in health care, with the goals of accelerating drug development and improving patient survival, though each comes with its own limitations.
no code implementations • 25 Aug 2023 • Hanwen Wang, Yu Qi, Lin Yao, Yueming Wang, Dario Farina, Gang Pan
A human-machine joint learning framework is then proposed: 1) on the human side, we model the learning process as a sequential trial-and-error scenario and propose a novel "copy/new" feedback paradigm to help shape the subject's signal generation toward the optimal distribution; 2) on the machine side, we propose a novel adaptive learning algorithm that learns an optimal signal distribution alongside the subject's own learning process.
1 code implementation • 16 Aug 2023 • Sifan Wang, Shyam Sankaran, Hanwen Wang, Paris Perdikaris
Physics-informed neural networks (PINNs) have been popularized as a deep learning framework that can seamlessly synthesize observational data and partial differential equation (PDE) constraints.
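For orientation, the following is a minimal, hedged PINN sketch in PyTorch rather than the paper's implementation: it fits a small network to the 1D Poisson problem u''(x) = f(x) with zero boundary data by combining a boundary-data loss with a PDE residual penalty. The network size, forcing term, and number of collocation points are illustrative placeholders.

```python
import torch
import torch.nn as nn

# Minimal PINN sketch for u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0.
# Forcing chosen so the exact solution is u(x) = sin(pi x), i.e. f(x) = -pi^2 sin(pi x).
torch.manual_seed(0)

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x_bc = torch.tensor([[0.0], [1.0]])              # boundary points
u_bc = torch.zeros_like(x_bc)                    # boundary data
x_col = torch.rand(128, 1, requires_grad=True)   # interior collocation points

def pde_residual(x):
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    f = -torch.pi**2 * torch.sin(torch.pi * x)   # prescribed forcing term
    return d2u - f

for step in range(2000):
    opt.zero_grad()
    # Composite loss: boundary (data) misfit + PDE residual at collocation points.
    loss = ((net(x_bc) - u_bc)**2).mean() + (pde_residual(x_col)**2).mean()
    loss.backward()
    opt.step()
```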
1 code implementation • 3 Oct 2022 • Sifan Wang, Hanwen Wang, Jacob H. Seidman, Paris Perdikaris
Continuous neural representations have recently emerged as a powerful and flexible alternative to classical discretized representations of signals.
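As a concrete illustration of what "continuous representation" means here, the sketch below fits a coordinate-based MLP to samples of a 1D signal so the signal can afterwards be queried at arbitrary coordinates; the signal, network width, and training budget are placeholder choices and not taken from the paper.

```python
import torch
import torch.nn as nn

# Coordinate-based (continuous) representation of a 1D signal s(x) = sin(4*pi*x):
# the network maps a coordinate x to the signal value, so it can be evaluated anywhere.
torch.manual_seed(0)

x_train = torch.linspace(0, 1, 200).unsqueeze(-1)   # sampled coordinates
s_train = torch.sin(4 * torch.pi * x_train)         # sampled signal values

net = nn.Sequential(nn.Linear(1, 128), nn.Tanh(),
                    nn.Linear(128, 128), nn.Tanh(),
                    nn.Linear(128, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    opt.zero_grad()
    loss = ((net(x_train) - s_train)**2).mean()
    loss.backward()
    opt.step()

# Query the learned representation at coordinates off the training grid.
x_query = torch.rand(5, 1)
with torch.no_grad():
    print(net(x_query))
```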
1 code implementation • 4 Oct 2021 • Sifan Wang, Hanwen Wang, Paris Perdikaris
In this work we analyze the training dynamics of deep operator networks (DeepONets) through the lens of Neural Tangent Kernel (NTK) theory, and reveal a bias that favors the approximation of functions with larger magnitudes.
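As a rough pointer to the kind of diagnostic such an analysis relies on, the sketch below computes the empirical NTK of a generic scalar-output network as an inner product of parameter-space Jacobians; it is not the paper's DeepONet-specific derivation, just the standard empirical-kernel computation.

```python
import torch
import torch.nn as nn

def empirical_ntk(model, x1, x2):
    """Empirical NTK K[i, j] = <d f(x1_i)/d theta, d f(x2_j)/d theta> for a scalar-output model."""
    def param_jacobian(x):
        rows = []
        for xi in x:
            out = model(xi.unsqueeze(0)).squeeze()
            grads = torch.autograd.grad(out, list(model.parameters()))
            rows.append(torch.cat([g.reshape(-1) for g in grads]))
        return torch.stack(rows)
    return param_jacobian(x1) @ param_jacobian(x2).T

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
x = torch.linspace(-1, 1, 10).unsqueeze(-1)
print(empirical_ntk(net, x, x))   # 10 x 10 kernel matrix at initialization
```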
no code implementations • NeurIPS Workshop DLDE 2021 • Hanwen Wang, Isabelle Crawford-Eng, Paris Perdikaris
Multilayer Perceptrons (MLPs) define a fundamental model class that forms the backbone of many modern deep learning architectures.
2 code implementations • 19 Mar 2021 • Sifan Wang, Hanwen Wang, Paris Perdikaris
Deep operator networks (DeepONets) are receiving increased attention thanks to their demonstrated capability to approximate nonlinear operators between infinite-dimensional Banach spaces.
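For readers new to operator learning, the sketch below implements the standard DeepONet architecture: a branch network encodes the input function sampled at fixed sensor locations, a trunk network encodes the query location, and the two are merged by a dot product. The network sizes and sensor count are illustrative placeholders, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Standard DeepONet: G(u)(y) ~ <branch(u(x_1..x_m)), trunk(y)> + b."""
    def __init__(self, m_sensors=100, p=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(m_sensors, 128), nn.Tanh(), nn.Linear(128, p))
        self.trunk = nn.Sequential(nn.Linear(1, 128), nn.Tanh(), nn.Linear(128, p))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u_sensors, y):
        # u_sensors: (batch, m_sensors) input function sampled at fixed sensor points
        # y:         (batch, 1)         query location in the output function's domain
        b = self.branch(u_sensors)
        t = self.trunk(y)
        return (b * t).sum(dim=-1, keepdim=True) + self.bias

model = DeepONet()
u = torch.randn(8, 100)    # a batch of input functions observed at 100 sensors
y = torch.rand(8, 1)       # one query location per sample
print(model(u, y).shape)   # -> torch.Size([8, 1])
```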
1 code implementation • 18 Dec 2020 • Sifan Wang, Hanwen Wang, Paris Perdikaris
Physics-informed neural networks (PINNs) are demonstrating remarkable promise in integrating physical models with gappy and noisy observational data, but they still struggle in cases where the target functions to be approximated exhibit high-frequency or multi-scale features.
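One common remedy in this line of work is to pass the inputs through random Fourier feature embeddings at several frequency scales before the MLP, which makes high-frequency content easier to represent. The sketch below shows such an embedding in isolation, with placeholder scales and feature counts rather than the paper's specific settings.

```python
import torch
import torch.nn as nn

class MultiScaleFourierFeatures(nn.Module):
    """Random Fourier feature embedding [sin(2*pi*x B_s), cos(2*pi*x B_s)] concatenated over scales s."""
    def __init__(self, in_dim=1, n_features=64, scales=(1.0, 10.0, 50.0)):
        super().__init__()
        # One fixed (untrained) random frequency matrix per scale; larger scales resolve higher frequencies.
        self.B = [torch.randn(in_dim, n_features) * s for s in scales]

    def forward(self, x):
        feats = []
        for B in self.B:
            proj = 2 * torch.pi * x @ B
            feats.append(torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1))
        return torch.cat(feats, dim=-1)   # (batch, 2 * n_features * len(scales))

embed = MultiScaleFourierFeatures()
net = nn.Sequential(nn.Linear(2 * 64 * 3, 128), nn.Tanh(), nn.Linear(128, 1))
x = torch.rand(16, 1)
print(net(embed(x)).shape)   # -> torch.Size([16, 1])
```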