no code implementations • 4 Apr 2024 • Ben Adcock, Simone Brugiapaglia, Nick Dexter, Sebastian Moraga
For the latter, there is currently a significant gap between the approximation theory of DNNs and the practical performance of deep learning.
no code implementations • 25 Nov 2023 • Ben Adcock, Juan M. Cardenas, Nick Dexter
In summary, our work not only introduces a unified way to study the learning of unknown objects from general types of data, but also establishes a series of general theoretical guarantees that consolidate and improve upon various known results.
no code implementations • NeurIPS 2023 • Ben Adcock, Juan M. Cardenas, Nick Dexter
Our framework extends the standard setup by allowing for general types of data, rather than merely pointwise samples of the target function.
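As a schematic illustration (the notation $L_i$, $e_i$ below is ours, not necessarily the paper's), such general data can be modelled as noisy linear measurements of the target $f$:

$$ y_i = L_i(f) + e_i, \qquad i = 1, \dots, m, $$

where each $L_i$ is a linear map, with point evaluation $L_i(f) = f(x_i)$ recovering the classical setting.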
1 code implementation • 25 Aug 2022 • Ben Adcock, Juan M. Cardenas, Nick Dexter
In this work, we propose an adaptive sampling strategy, CAS4DL (Christoffel Adaptive Sampling for Deep Learning), to increase the sample efficiency of DL for multivariate function approximation.
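As a rough illustration of Christoffel-based adaptive sampling in the spirit of CAS4DL, here is a minimal sketch; the function names, the candidate-grid discretization, the QR-based orthonormalization, and the uniform mixture weight mix are our own illustrative assumptions, not the authors' implementation.

    # Sketch (assumed, not the authors' code): sample new training points
    # for a DNN in proportion to the inverse Christoffel function of the
    # span of its last-hidden-layer features.
    import numpy as np

    def christoffel_density(features, candidates):
        # features: callable mapping an (M, d) array of points to an
        # (M, N) matrix of last-layer DNN features evaluated at them.
        Phi = features(candidates)
        # Orthonormalize the features w.r.t. the empirical measure on the
        # candidate grid; the columns of Q span the same space as Phi.
        Q, _ = np.linalg.qr(Phi / np.sqrt(len(candidates)))
        # Unnormalized density ~ (1/N) * sum_j |phi_tilde_j(x)|^2.
        return np.sum(Q ** 2, axis=1) / Q.shape[1]

    def draw_adaptive_samples(features, candidates, k, rng, mix=0.5):
        # Draw k new points from a mixture of the Christoffel measure and
        # the uniform measure on the grid (mix=0.5 is an assumed weight).
        w = christoffel_density(features, candidates)
        p = mix * w / w.sum() + (1.0 - mix) / len(candidates)
        idx = rng.choice(len(candidates), size=k, replace=False, p=p)
        return candidates[idx]

The idea, as in optimal weighted least-squares sampling, is that regions where the current feature span has large pointwise energy are sampled more densely, which is what improves sample efficiency over i.i.d. uniform sampling.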
no code implementations • 25 Mar 2022 • Ben Adcock, Simone Brugiapaglia, Nick Dexter, Sebastian Moraga
On the one hand, there is a well-developed theory of best $s$-term polynomial approximation, which asserts exponential or algebraic rates of convergence for holomorphic functions.
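Loosely, the rates meant here take the following form (constants, norms, and the precise holomorphy assumptions are omitted, and the exact statements vary across the literature):

$$ \inf_{|S| \le s} \Big\| f - \sum_{\nu \in S} c_\nu \Psi_\nu \Big\| \;\le\; \begin{cases} C\, e^{-\gamma s^{1/d}} & \text{($d$ finite: exponential)} \\ C\, s^{1 - 1/p} & \text{($c \in \ell^p$, $0 < p < 1$: algebraic)} \end{cases} $$

where $\{\Psi_\nu\}$ is an orthonormal polynomial basis, $c = (c_\nu)$ are the coefficients of $f$, and $S$ ranges over index sets of cardinality at most $s$.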
no code implementations • 11 Dec 2020 • Ben Adcock, Simone Brugiapaglia, Nick Dexter, Sebastian Moraga
Such problems are challenging: 1) pointwise samples are expensive to acquire, 2) the function domain is high-dimensional, and 3) the range lies in a Hilbert space.
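Schematically (the notation here is ours), the task is:

$$ \text{learn } f : \mathcal{U} \to \mathcal{V}, \quad \mathcal{U} \subseteq \mathbb{R}^d, \ d \gg 1, \quad \text{from data } (y_i, f(y_i)), \ i = 1, \dots, m, $$

where $\mathcal{V}$ is a Hilbert space, typically infinite-dimensional (for instance, the solution space of a parametric PDE), and the sample budget $m$ is limited.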
1 code implementation • 16 Jan 2020 • Ben Adcock, Nick Dexter
Our main conclusion from these experiments is that there is a crucial gap between the approximation theory of DNNs and their practical performance: trained DNNs perform relatively poorly on functions for which strong approximation results exist (e.g., smooth functions), yet perform well in comparison to best-in-class methods on other functions.