no code implementations • 13 Mar 2024 • Francesca Bartolucci, Ernesto de Vito, Lorenzo Rosasco, Stefano Vigogna
Studying the function spaces defined by neural networks helps to understand the corresponding learning models and their inductive bias.
1 code implementation • 22 Nov 2023 • Antoine Chatalic, Nicolas Schreuder, Ernesto de Vito, Lorenzo Rosasco
In this work we consider the problem of numerical integration, i.e., approximating integrals with respect to a target probability measure using only pointwise evaluations of the integrand.
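A minimal sketch of the setting (plain Monte Carlo rather than the paper's quadrature rule): the integral is estimated from pointwise evaluations of the integrand at points drawn from the target measure; the integrand and sampler below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's method): approximate the integral of f
# against a target probability measure pi using only pointwise evaluations,
# here via plain Monte Carlo with samples drawn from pi.
rng = np.random.default_rng(0)

def integrate(f, sample_pi, n=10_000):
    """Estimate E_pi[f], i.e. the integral of f with respect to pi."""
    x = sample_pi(n)        # points drawn from the target measure pi
    return f(x).mean()      # equal-weight quadrature rule

# Example: integrate f(x) = x^2 against a standard Gaussian (exact value 1).
est = integrate(lambda x: x**2, lambda n: rng.standard_normal(n))
print(f"estimate: {est:.3f}  (exact: 1.0)")
```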
no code implementations • 4 Dec 2022 • Andrea Della Vecchia, Ernesto de Vito, Lorenzo Rosasco
We study a natural extension of classical empirical risk minimization, where the hypothesis space is a random subspace of a given space.
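A hedged illustration of such a random-subspace estimator: a Nyström-type construction in which the hypothesis space is spanned by kernel sections at randomly chosen landmark points. The kernel, sizes, and regularization below are illustrative choices, not the paper's exact setup.

```python
import numpy as np

# Sketch of empirical risk minimization over a random subspace:
# kernel ridge regression restricted to the span of kernel sections
# at m uniformly sampled landmark points (a Nystrom-type subspace).
def gauss_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def nystrom_krr(X, y, m=50, lam=1e-3, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)   # random subspace
    Z = X[idx]
    Knm, Kmm = gauss_kernel(X, Z, sigma), gauss_kernel(Z, Z, sigma)
    # Regularized least squares solved in the m-dimensional subspace.
    alpha = np.linalg.solve(Knm.T @ Knm + len(X) * lam * Kmm, Knm.T @ y)
    return lambda Xt: gauss_kernel(Xt, Z, sigma) @ alpha
```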
no code implementations • 3 Feb 2022 • Stefano Vigogna, Giacomo Meanti, Ernesto de Vito, Lorenzo Rosasco
We study the behavior of error bounds for multiclass classification under suitable margin conditions.
1 code implementation • 17 Jan 2022 • Giacomo Meanti, Luigi Carratino, Ernesto de Vito, Lorenzo Rosasco
Our analysis shows the benefit of the proposed approach, which we therefore incorporate into a library for large-scale kernel methods to derive adaptively tuned solutions.
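A toy stand-in for the tuning problem itself (not the library's procedure, which is gradient-based and designed for large scale): choose the kernel bandwidth and regularization that minimize hold-out error. The `fit` argument is a hypothetical training routine, e.g. the Nyström sketch above.

```python
import numpy as np

# Brute-force hyperparameter tuning on a validation split; purely
# illustrative of what "adaptively tuned" means, not the paper's method.
def tune(X, y, Xv, yv, fit,
         sigmas=(0.1, 1.0, 10.0), lams=(1e-6, 1e-3, 1e-1)):
    best = None
    for sigma in sigmas:
        for lam in lams:
            f = fit(X, y, sigma=sigma, lam=lam)       # hypothetical trainer
            err = np.mean((f(Xv) - yv) ** 2)          # hold-out error
            if best is None or err < best[0]:
                best = (err, sigma, lam)
    return best
```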
1 code implementation • 21 Oct 2021 • Antoine Chatalic, Luigi Carratino, Ernesto de Vito, Lorenzo Rosasco
Compressive learning is an approach to efficient large-scale learning based on sketching an entire dataset to a single mean embedding (the sketch), i.e., a vector of generalized moments.
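A minimal sketch of such a sketch: the empirical mean of random Fourier features, one vector of generalized moments summarizing the whole dataset. The feature map and dimensions are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Compute a dataset "sketch" as the empirical mean embedding of random
# Fourier features: a single m-dimensional vector of generalized moments.
rng = np.random.default_rng(0)

def sketch(X, m=256, sigma=1.0):
    d = X.shape[1]
    W = rng.standard_normal((d, m)) / sigma        # random frequencies
    b = rng.uniform(0, 2 * np.pi, m)               # random phases
    Phi = np.sqrt(2.0 / m) * np.cos(X @ W + b)     # random Fourier features
    return Phi.mean(axis=0)                        # the mean embedding

X = rng.standard_normal((10_000, 5))
z = sketch(X)        # one 256-dimensional vector summarizing 10,000 points
```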
no code implementations • 20 Sep 2021 • Francesca Bartolucci, Ernesto de Vito, Lorenzo Rosasco, Stefano Vigogna
Characterizing the function spaces corresponding to neural networks can provide a way to understand their properties.
1 code implementation • NeurIPS 2021 • Giovanni S. Alberti, Ernesto de Vito, Matti Lassas, Luca Ratti, Matteo Santacesaria
Then, we consider the problem of learning the regularizer from a finite training set in two different frameworks: one supervised, based on samples of both $x$ and $y$, and one unsupervised, based only on samples of $x$.
no code implementations • 17 Jun 2020 • Nicolò Pagliana, Alessandro Rudi, Ernesto De Vito, Lorenzo Rosasco
We study the learning properties of nonparametric ridgeless least squares.
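A hedged sketch of the estimator in question: minimum-norm (ridgeless) kernel least squares, obtained by solving the interpolation system with a pseudoinverse, i.e. kernel ridge regression in the limit of vanishing regularization. The Gaussian kernel and toy data are assumptions.

```python
import numpy as np

# Ridgeless (minimum-norm) kernel least squares: interpolate the training
# data by solving K alpha = y with the pseudoinverse.
def gauss_kernel(A, B, sigma=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def ridgeless_fit(X, y, sigma=0.3):
    alpha = np.linalg.pinv(gauss_kernel(X, X, sigma)) @ y   # min-norm solution
    return lambda Xt: gauss_kernel(Xt, X, sigma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (15, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(15)
f = ridgeless_fit(X, y)
print(np.abs(f(X) - y).max())   # small: the fit (nearly) interpolates the data
```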
no code implementations • 17 Jun 2020 • Andrea Della Vecchia, Jaouad Mourtada, Ernesto de Vito, Lorenzo Rosasco
We study a natural extension of classical empirical risk minimization, where the hypothesis space is a random subspace of a given space.
no code implementations • 8 Jul 2019 • Enrico Cecini, Ernesto de Vito, Lorenzo Rosasco
Our main technical contribution is an analysis of the expected distortion achieved by the proposed algorithm, when the data are assumed to be sampled from a fixed unknown distribution.
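A hedged illustration of the quantity being analyzed, the empirical distortion of a codebook (mean squared distance from each point to its nearest code vector); the paper's algorithm for constructing the codebook is not reproduced here.

```python
import numpy as np

# Empirical distortion of a codebook: average squared distance from each
# data point to its closest code vector. On samples from an unknown
# distribution, this estimates the expected distortion of the quantizer.
def distortion(X, codebook):
    d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()
```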
no code implementations • 27 May 2019 • Ernesto De Vito, Nicole Mücke, Lorenzo Rosasco
We study reproducing kernel Hilbert spaces (RKHS) on a Riemannian manifold.
no code implementations • 23 Sep 2018 • Ernesto de Vito, Zeljko Kereta, Valeria Naumova
Our analysis combines statistical learning theory with insights from regularisation theory.
no code implementations • 26 Jul 2016 • Miguel A. Duval-Poo, Nicoletta Noceti, Francesca Odone, Ernesto de Vito
We derive a measure that is highly effective for blob detection and is closely related to the Laplacian of Gaussian.
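For reference, a classical Laplacian-of-Gaussian blob detector, the baseline to which the derived measure is related (the paper's own detector is different); the scale and threshold below are illustrative.

```python
import numpy as np
from scipy import ndimage

# Classical Laplacian-of-Gaussian blob detection: compute the (negated)
# LoG response and keep local maxima above a fraction of the peak response.
def log_blobs(image, sigma=3.0, thresh=0.1):
    response = -ndimage.gaussian_laplace(image.astype(float), sigma)
    peaks = response == ndimage.maximum_filter(response, size=5)
    return np.argwhere(peaks & (response > thresh * response.max()))
```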
no code implementations • 16 Apr 2012 • Ernesto De Vito, Lorenzo Rosasco, Alessandro Toigo
We consider the problem of learning a set from random samples.
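A toy sketch of one classical approach to this problem, a Devroye–Wise-style estimator that declares a point inside the learned set if it lies within radius r of some sample; the radius is an assumption, and this is not necessarily the paper's construction.

```python
import numpy as np

# Estimate a set from random samples as the union of small balls around
# the samples; membership is tested by nearest-sample distance.
def set_estimator(samples, r=0.15):
    def contains(x):
        return (np.linalg.norm(samples - x, axis=1) <= r).any()
    return contains

rng = np.random.default_rng(0)
S = rng.uniform(-1, 1, (500, 2))            # samples drawn from the unit square
in_set = set_estimator(S)
print(in_set(np.array([0.0, 0.0])), in_set(np.array([3.0, 3.0])))  # True False
```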