no code implementations • NeurIPS 2021 • Alberto Bietti, Luca Venturi, Joan Bruna
Many supervised learning problems involve high-dimensional data such as images, text, or graphs.
no code implementations • 5 Apr 2021 • Terrence Alsup, Luca Venturi, Benjamin Peherstorfer
The proposed multilevel Stein variational gradient descent performs most iterations on lower, cheaper levels, so that only a few iterations are needed on the higher, more expensive levels, in contrast to the traditional single-level Stein variational gradient descent variant, which uses the highest-level distribution only.
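The multilevel construction itself is not reproduced here, but the single-level SVGD update it builds on can be sketched in a few lines. This is a minimal sketch for 1-D particles with an RBF kernel; the bandwidth, step size, and standard-normal target are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def svgd_step(x, grad_logp, h=0.5, eps=0.1):
    # One (single-level) SVGD update for 1-D particles with an RBF kernel:
    # phi(x_i) = (1/n) * sum_j [ k(x_j, x_i) * grad log p(x_j) + d k(x_j, x_i)/d x_j ]
    diff = x[:, None] - x[None, :]        # diff[j, i] = x_j - x_i
    k = np.exp(-diff**2 / (2 * h**2))     # kernel matrix
    grad_k = -diff / h**2 * k             # derivative w.r.t. the first argument x_j
    phi = (k @ grad_logp + grad_k.sum(axis=0)) / x.size
    return x + eps * phi

# Illustrative target: standard normal, so grad log p(x) = -x.
# Particles start far from the target and are transported toward it.
x = np.linspace(2.0, 4.0, 20)
for _ in range(500):
    x = svgd_step(x, -x)
```

The kernel term pulls particles toward high-density regions while the `grad_k` term acts as a repulsive force that keeps the particle set spread out, which is what distinguishes SVGD from plain gradient ascent on log-density.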
no code implementations • 2 Feb 2021 • Luca Venturi, Samy Jelassi, Tristan Ozuch, Joan Bruna
The first contribution of this paper is to extend such results to a more general class of functions, namely functions with piecewise oscillatory structure, by building on the proof strategy of (Eldan and Shamir, 2016).
no code implementations • 28 Jul 2020 • Donsub Rim, Luca Venturi, Joan Bruna, Benjamin Peherstorfer
Classical reduced models are low-rank approximations using a fixed basis designed to achieve dimensionality reduction of large-scale systems.
no code implementations • 23 Mar 2020 • Dan Kushnir, Luca Venturi
To alleviate the load of data annotation, active deep learning aims to select a minimal set of training points whose labelling yields maximal model accuracy.
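A generic uncertainty-sampling loop illustrates the selection problem being described; the simple logistic model, query budget, and two-blob data here are assumptions for the sketch and do not reproduce the paper's criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logistic(X, y, steps=500, lr=0.5):
    # plain gradient descent on logistic loss (no intercept, for brevity)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# two Gaussian blobs with labels 0 / 1
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.repeat([0, 1], 100)

labelled = list(rng.choice(200, 4, replace=False))   # tiny seed set
for _ in range(10):                                  # query 10 more labels
    w = fit_logistic(X[labelled], y[labelled])
    p = 1 / (1 + np.exp(-X @ w))
    uncertainty = -np.abs(p - 0.5)                   # p near 0.5 = most uncertain
    uncertainty[labelled] = -np.inf                  # never re-query a point
    labelled.append(int(np.argmax(uncertainty)))

w = fit_logistic(X[labelled], y[labelled])
acc = ((1 / (1 + np.exp(-X @ w)) > 0.5) == y).mean()
```

The loop alternates between fitting on the labelled pool and querying the point the current model is least sure about, so only a small fraction of the dataset ever needs annotation.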
no code implementations • 18 Feb 2018 • Luca Venturi, Afonso S. Bandeira, Joan Bruna
Focusing on a class of two-layer neural networks defined by smooth (but generally non-linear) activation functions, we identify a notion of intrinsic dimension and show that it provides necessary and sufficient conditions for the absence of spurious valleys.
no code implementations • 22 Dec 2017 • Chen Li, Luca Venturi, Ruitu Xu
We investigate a series of learning kernel problems with polynomial combinations of base kernels, which will help us solve regression and classification problems.