no code implementations • 27 Sep 2022 • Ruben Villarreal, Nikolaos N. Vlassis, Nhon N. Phan, Tommie A. Catanach, Reese E. Jones, Nathaniel A. Trask, Sharlotte L. B. Kramer, WaiChing Sun
These new data drive a Bayesian update of the model parameters via the Kalman filter (KF), which in turn enhances the state representation.
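For concreteness, here is a minimal sketch of the measurement step such a KF-based Bayesian parameter update performs, assuming a linear(ized) Gaussian observation model; the function and variable names are illustrative, not the paper's implementation:

```python
import numpy as np

def kf_parameter_update(theta, P, y, H, R):
    """One Kalman-filter measurement update, treating model parameters as the state.

    theta : (n,)   current parameter mean
    P     : (n, n) parameter covariance
    y     : (m,)   new observation
    H     : (m, n) linearized observation operator
    R     : (m, m) observation-noise covariance
    """
    S = H @ P @ H.T + R                         # innovation covariance
    K = np.linalg.solve(S, H @ P).T             # Kalman gain K = P H^T S^{-1}
    theta_new = theta + K @ (y - H @ theta)     # posterior (updated) mean
    P_new = (np.eye(len(theta)) - K @ H) @ P    # posterior covariance
    return theta_new, P_new
```

Each new experiment contributes a `(y, H, R)` triple, so the posterior from one update becomes the prior for the next.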
no code implementations • NeurIPS 2021 • Kookjin Lee, Nathaniel A. Trask, Panos Stinis
Forecasting of time-series data requires the imposition of inductive biases to obtain predictive extrapolation, and recent works have imposed a Hamiltonian/Lagrangian form to preserve structure for systems with reversible dynamics.
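A minimal sketch of the Hamiltonian inductive bias this refers to (the reversible case, not the paper's bracket construction), assuming canonical coordinates (q, p) and a learned scalar energy; the architecture choices are illustrative:

```python
import torch
import torch.nn as nn

class HamiltonianDynamics(nn.Module):
    """Learn a scalar H(q, p); time derivatives follow the symplectic form,
    so the learned flow conserves H by construction."""
    def __init__(self, dim, width=64):
        super().__init__()
        self.dim = dim
        self.H = nn.Sequential(
            nn.Linear(2 * dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, z):                     # z = (q, p), shape (batch, 2*dim)
        z = z.detach().requires_grad_(True)   # differentiate H w.r.t. inputs
        H = self.H(z).sum()
        dH = torch.autograd.grad(H, z, create_graph=True)[0]
        dHdq, dHdp = dH[:, :self.dim], dH[:, self.dim:]
        # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq
        return torch.cat([dHdp, -dHdq], dim=-1)
```

Training would fit the returned time derivatives to (finite-difference) derivatives of observed trajectories; the structure itself guarantees energy conservation, which is exactly what fails for irreversible systems.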
no code implementations • 27 Jan 2021 • Kookjin Lee, Nathaniel A. Trask, Ravi G. Patel, Mamikon A. Gulian, Eric C. Cyr
Approximation theorists have established optimal approximation rates for deep neural networks by exploiting their ability to simultaneously emulate partitions of unity and monomials.
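A sketch of a network built in this spirit, assuming softmax branch weights (which sum to one and therefore form a partition of unity) multiplying local degree-1 polynomials; the sizes and names are illustrative:

```python
import torch
import torch.nn as nn

class POUNet(nn.Module):
    """u(x) = sum_i phi_i(x) * p_i(x): the softmax branches phi_i form a
    partition of unity, each weighting a local polynomial fit."""
    def __init__(self, in_dim, n_parts=8, width=32):
        super().__init__()
        self.partition = nn.Sequential(   # phi(x): rows sum to one
            nn.Linear(in_dim, width), nn.Tanh(),
            nn.Linear(width, n_parts), nn.Softmax(dim=-1),
        )
        # degree-1 polynomial coefficients per partition: c_{i,0} + c_i . x
        self.coef = nn.Parameter(torch.zeros(n_parts, in_dim + 1))

    def forward(self, x):
        phi = self.partition(x)                               # (batch, n_parts)
        ones = torch.ones(x.shape[0], 1, device=x.device)
        basis = torch.cat([ones, x], dim=-1)                  # (batch, in_dim+1)
        poly = basis @ self.coef.T                            # (batch, n_parts)
        return (phi * poly).sum(-1, keepdim=True)
```

Refining the partitions (more branches) and raising the polynomial degree play the roles of h- and p-refinement, respectively.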
1 code implementation • 25 Sep 2020 • Ravi G. Patel, Nathaniel A. Trask, Mitchell A. Wood, Eric C. Cyr
Applying deep learning to the discovery of data-driven models requires careful imposition of inductive biases to obtain a description of the physics that is both accurate and robust.
no code implementations • 17 Jun 2020 • Ravi G. Patel, Nathaniel A. Trask, Mamikon A. Gulian, Eric C. Cyr
By alternating between a second-order method to find globally optimal parameters for the linear layer and gradient descent to train the hidden layers, we ensure an optimal fit of the adaptive basis to data throughout training.
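A sketch of one such alternation, assuming a PyTorch model split into hidden layers plus a final linear layer, with the optimizer covering only the hidden parameters; the names are illustrative:

```python
import torch
import torch.nn as nn

def alternating_epoch(hidden, linear, X, y, opt):
    """One alternation: exact least-squares solve for the output layer,
    then a gradient-descent step on the hidden (adaptive-basis) layers.

    hidden : nn.Module mapping (N, in_dim) -> (N, width)
    linear : nn.Linear(width, out_dim)
    y      : (N, out_dim) targets; opt optimizes hidden.parameters() only
    """
    # 1) With the hidden layers frozen, the output weights solve a linear
    #    least-squares problem exactly (globally optimal for this layer).
    with torch.no_grad():
        feats = hidden(X)                                        # (N, width)
        A = torch.cat([feats, torch.ones(len(X), 1, device=X.device)], dim=1)
        sol = torch.linalg.lstsq(A, y).solution                  # (width+1, out_dim)
        linear.weight.copy_(sol[:-1].T)
        linear.bias.copy_(sol[-1])
    # 2) Gradient step on the hidden layers against the refitted output layer.
    opt.zero_grad()
    loss = nn.functional.mse_loss(linear(hidden(X)), y)
    loss.backward()
    opt.step()
    return loss.item()
```

Because the linear solve is exact at every alternation, the loss reflects the best the current adaptive basis can do, and gradient descent only has to improve the basis itself.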
no code implementations • 10 Dec 2019 • Eric C. Cyr, Mamikon A. Gulian, Ravi G. Patel, Mauro Perego, Nathaniel A. Trask
Motivated by the gap between theoretical optimal approximation rates of deep neural networks (DNNs) and the accuracy realized in practice, we seek to improve the training of DNNs.