no code implementations • 22 Mar 2024 • Gilles Blanchard, Jean-Baptiste Fermanian, Hannah Marienwald
We endeavour to estimate numerous multi-dimensional means of various probability distributions on a common space based on independent samples.
1 code implementation • 27 Oct 2023 • Ulysse Gazin, Gilles Blanchard, Etienne Roquain
Conformal inference is a fundamental and versatile tool that provides distribution-free guarantees for many machine learning tasks.
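As an illustration of the kind of distribution-free guarantee at stake, here is a minimal split-conformal prediction sketch (a generic textbook construction, not the specific procedure studied in the paper; the line fit and the level $\alpha = 0.1$ are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y = x + noise.
n = 1000
x = rng.uniform(-1, 1, n)
y = x + 0.1 * rng.normal(size=n)

# Split: fit a predictor on one half, calibrate on the other.
x_fit, y_fit = x[: n // 2], y[: n // 2]
x_cal, y_cal = x[n // 2 :], y[n // 2 :]

# Toy predictor: least-squares line fit.
slope, intercept = np.polyfit(x_fit, y_fit, 1)
predict = lambda t: slope * t + intercept

# Calibration scores = absolute residuals; take the adjusted quantile.
alpha = 0.1
scores = np.abs(y_cal - predict(x_cal))
k = int(np.ceil((1 - alpha) * (len(scores) + 1)))
q = np.sort(scores)[k - 1]

# Prediction interval for a new point: coverage >= 1 - alpha holds for
# exchangeable data, whatever the fitted model is.
x_new = 0.3
print(predict(x_new) - q, predict(x_new) + q)
```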
no code implementations • 7 Jun 2023 • Bastien Dussap, Gilles Blanchard, Badr-Eddine Chérief-Abdellatif
Quantification learning deals with the task of estimating the target label distribution under label shift.
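For context, a minimal sketch of one classical recipe for this task, black-box confusion-matrix inversion (used purely as an illustration; it is not necessarily the estimator proposed in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Under label shift the class proportions pi change between source and
# target while the class-conditionals do not.  A classic estimator
# inverts the classifier's source confusion matrix.
def quantify(source_labels, source_preds, target_preds, n_classes):
    # C[i, j] = P(predict i | true class j), estimated on the source.
    C = np.zeros((n_classes, n_classes))
    for j in range(n_classes):
        mask = source_labels == j
        C[:, j] = np.bincount(source_preds[mask], minlength=n_classes) / mask.sum()
    # Observed prediction frequencies on the target.
    q = np.bincount(target_preds, minlength=n_classes) / len(target_preds)
    # Solve C pi = q, then project back onto the simplex.
    pi = np.clip(np.linalg.solve(C, q), 0, None)
    return pi / pi.sum()

# Toy demo: an imperfect classifier, true target proportions (0.7, 0.3).
src_y = rng.integers(0, 2, 5000)
src_pred = np.where(rng.random(5000) < 0.85, src_y, 1 - src_y)
tgt_y = (rng.random(5000) < 0.3).astype(int)
tgt_pred = np.where(rng.random(5000) < 0.85, tgt_y, 1 - tgt_y)
print(quantify(src_y, src_pred, tgt_pred, 2))  # approx [0.7, 0.3]
```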
no code implementations • 5 Jun 2023 • El Mehdi Saad, Gilles Blanchard, Nicolas Verzelen
This framework allows the learner to estimate the covariance among the arms distributions, enabling a more efficient identification of the best arm.
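A toy sketch of why joint sampling helps, assuming a setting where all arms are observed in the same round so their covariance is estimable (the paper's actual sampling protocol and stopping rule are not reproduced here): the variance of an estimated gap between two arms shrinks when their rewards are positively correlated.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two arms with correlated rewards observed jointly in each round.
mu = np.array([0.0, 0.2])
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
rewards = rng.multivariate_normal(mu, cov, size=200)  # one row per round

# Gap estimate between arms and its variance, using the estimated
# covariance: Var(X1 - X0) = Var(X1) + Var(X0) - 2 Cov(X0, X1).
gap = rewards[:, 1].mean() - rewards[:, 0].mean()
S = np.cov(rewards.T)
var_gap = (S[0, 0] + S[1, 1] - 2 * S[0, 1]) / len(rewards)
print(gap, np.sqrt(var_gap))  # correlation makes the gap estimate sharp
```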
1 code implementation • 15 Mar 2023 • Olympio Hacquard, Gilles Blanchard, Clément Levrard
We consider a binary supervised learning classification problem where instead of having data in a finite-dimensional Euclidean space, we observe measures on a compact space $\mathcal{X}$.
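A minimal sketch of one standard way to learn from measures, via empirical kernel mean embeddings with random Fourier features (an illustration of the general setting, not necessarily the representation analysed in the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# Each observation is a measure, here given through a sampled point
# cloud; embed it as an averaged random Fourier feature vector (an
# empirical kernel mean embedding), then classify in that vector space.
D, d = 200, 2
W = rng.normal(size=(d, D))                 # Gaussian-kernel frequencies
b = rng.uniform(0, 2 * np.pi, D)
embed = lambda pts: np.sqrt(2.0 / D) * np.cos(pts @ W + b).mean(axis=0)

# Toy data: class 0 = clouds around 0, class 1 = shifted clouds.
X = np.array([embed(rng.normal(size=(30, d)) + 1.5 * (i >= 50))
              for i in range(100)])
y = np.repeat([0, 1], 50)

# Nearest-centroid rule in embedding space as the simplest classifier.
c0, c1 = X[y == 0].mean(0), X[y == 1].mean(0)
pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
print((pred == y).mean())
```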
no code implementations • NeurIPS 2021 • El Mehdi Saad, Gilles Blanchard
We investigate the problem of minimizing the excess generalization error with respect to the best expert prediction in a finite family in the stochastic setting, under limited access to information.
no code implementations • 26 Oct 2021 • Olympio Hacquard, Krishnakumar Balasubramanian, Gilles Blanchard, Clément Levrard, Wolfgang Polonik
We study a regression problem on a compact manifold $M$. To take advantage of the underlying geometry and topology of the data, the regression task is performed on the basis of the first several eigenfunctions of the Laplace-Beltrami operator of the manifold, which are regularized with topological penalties.
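In practice the manifold is unknown and the Laplace-Beltrami eigenfunctions are approximated by eigenvectors of a neighborhood-graph Laplacian; below is a minimal sketch of that discrete analogue (without the topological penalties, which are the paper's specific contribution):

```python
import numpy as np

rng = np.random.default_rng(4)

# Sample points on a circle (a 1-D manifold) and a smooth response.
n = 300
theta = rng.uniform(0, 2 * np.pi, n)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
y = np.sin(2 * theta) + 0.1 * rng.normal(size=n)

# Gaussian-weighted graph Laplacian as a proxy for Laplace-Beltrami.
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 0.1)
L = np.diag(W.sum(1)) - W

# Regression on the first k Laplacian eigenvectors (least squares).
eigvals, eigvecs = np.linalg.eigh(L)
k = 10
Phi = eigvecs[:, :k]                       # smoothest basis functions
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ coef
print(np.mean((y_hat - y) ** 2))           # in-sample fit of the expansion
```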
no code implementations • 29 Sep 2021 • Tristan Mary-Huard, Vittorio Perduca, Gilles Blanchard, Marie-Laure Martin-Magniette
In the context of finite mixture models one considers the problem of classifying as many observations as possible in the classes of interest while controlling the classification error rate in these same classes.
no code implementations • 1 Sep 2021 • Gilles Blanchard, Jean-Baptiste Fermanian
Particular attention is given to the dependence on the pseudo-dimension $d_*$ of the distribution, defined as $d_* := \|\Sigma\|_2^2/\|\Sigma\|_\infty^2$.
no code implementations • 22 Nov 2020 • El Mehdi Saad, Gilles Blanchard, Sylvain Arlot
Greedy algorithms for feature selection are widely used for recovering sparse high-dimensional vectors in linear models.
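A minimal orthogonal-matching-pursuit sketch of the greedy template such analyses cover (the paper studies several such algorithms; this variant is just one instance):

```python
import numpy as np

def omp(X, y, n_steps):
    """Greedy selection: add the feature most correlated with the
    current residual, then refit by least squares on the active set."""
    residual, active = y.copy(), []
    for _ in range(n_steps):
        correlations = np.abs(X.T @ residual)
        correlations[active] = -np.inf          # do not re-select
        active.append(int(np.argmax(correlations)))
        coef, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
        residual = y - X[:, active] @ coef
    return active, coef

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 50))
beta = np.zeros(50); beta[[3, 17, 40]] = [2.0, -1.5, 1.0]
y = X @ beta + 0.1 * rng.normal(size=100)
print(omp(X, y, 3)[0])  # recovers {3, 17, 40} on this easy instance
```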
no code implementations • 13 Nov 2020 • Hannah Marienwald, Jean-Baptiste Fermanian, Gilles Blanchard
We propose an improved estimator for the multi-task averaging problem, whose goal is the joint estimation of the means of multiple distributions using separate, independent data sets.
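A toy sketch of the borrowing-strength idea: shrink each task's empirical mean toward the average of nearby tasks. The similarity test below is a crude placeholder calibrated to the sampling noise, not the procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# T tasks, each with its own mean vector and a small sample.
T, d, n = 30, 20, 10
true_means = rng.normal(size=(T, d))
true_means[:15] = true_means[0]            # a cluster of identical tasks
samples = true_means[:, None, :] + rng.normal(size=(T, n, d))
naive = samples.mean(axis=1)               # one empirical mean per task

# Average each task with its "close" tasks, where closeness is judged
# against the sampling noise level (a crude similarity test).
radius = 2 * np.sqrt(2 * d / n)
improved = np.empty_like(naive)
for t in range(T):
    close = np.linalg.norm(naive - naive[t], axis=1) < radius
    improved[t] = naive[close].mean(axis=0)

print(np.mean((naive - true_means) ** 2),
      np.mean((improved - true_means) ** 2))  # improved should be smaller
```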
no code implementations • 17 Apr 2020 • Rémi Gribonval, Gilles Blanchard, Nicolas Keriven, Yann Traonmilin
We provide statistical learning guarantees for two unsupervised learning tasks in the context of compressive statistical learning, a general framework for resource-efficient large-scale learning that we introduced in a companion paper. The principle of compressive statistical learning is to compress a training collection, in one pass, into a low-dimensional sketch (a vector of random empirical generalized moments) that captures the information relevant to the considered learning task.
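A minimal sketch of the compression step only, assuming random Fourier moments; the recovery step (fitting model parameters by matching the sketch) is the substantive part of the framework and is omitted here.

```python
import numpy as np

rng = np.random.default_rng(7)

# Sketch = empirical average of random generalized moments, here random
# Fourier moments z(x) = exp(i w^T x).  It is computed in one pass and
# mergeable across data chunks; learning then only touches the sketch.
d, m = 5, 64
Omega = rng.normal(size=(d, m)) / 2.0      # random frequencies

def sketch(X):
    return np.exp(1j * X @ Omega).mean(axis=0)   # length-m complex vector

X = rng.normal(loc=3.0, size=(10_000, d))
s = sketch(X)

# One-pass / streaming: sketches of equal-size chunks average to the
# full sketch, so the training collection never needs to be stored.
s_stream = np.mean([sketch(c) for c in np.array_split(X, 10)], axis=0)
print(np.allclose(s, s_stream))
```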
no code implementations • 6 Jul 2019 • Franziska Göbel, Gilles Blanchard
The aim of this paper is to establish two fundamental measure-metric properties of particular random geometric graphs.
no code implementations • 29 Jun 2019 • Leonidas Lefakis, Oleksandr Zadorozhnyi, Gilles Blanchard
We present a detailed analysis of the class of regression decision tree algorithms which employ a regularized piecewise-linear node-splitting criterion and have regularized linear models at the leaves.
no code implementations • 25 Jun 2019 • Oleksandr Zadorozhnyi, Gilles Blanchard, Alexandra Carpentier
The analysis of the slow-mixing scenario is supported by a minimax lower bound, which (up to a $\log(T)$ factor) matches the obtained upper bound.
no code implementations • 14 Feb 2019 • Abhishake Rastogi, Gilles Blanchard, Peter Mathé
We study a non-linear statistical inverse learning problem, where we observe the noisy image of a quantity through a non-linear operator at some random design points.
1 code implementation • 22 Oct 2018 • Juliette Achdou, Joseph C. Lam, Alexandra Carpentier, Gilles Blanchard
Rejection Sampling is a fundamental Monte-Carlo method.
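For reference, the textbook method (the paper's contribution is an adaptive variant; this sketch assumes a known envelope constant M):

```python
import numpy as np

rng = np.random.default_rng(8)

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, M, n):
    """Draw n samples from target_pdf using proposals, assuming
    target_pdf(x) <= M * proposal_pdf(x) everywhere."""
    out = []
    while len(out) < n:
        x = proposal_sample()
        if rng.random() <= target_pdf(x) / (M * proposal_pdf(x)):
            out.append(x)
    return np.array(out)

# Example: a triangular density on [0, 1] via uniform proposals, M = 2.
tri = lambda x: 2 * x
samples = rejection_sample(tri, lambda: rng.random(), lambda x: 1.0, 2.0, 10_000)
print(samples.mean())  # approx 2/3, the mean of the triangular density
```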
no code implementations • 5 Dec 2017 • Gilles Blanchard, Oleksandr Zadorozhnyi
We obtain a Bernstein-type inequality for sums of Banach-valued random variables satisfying a weak dependence assumption of general type and under certain smoothness assumptions of the underlying Banach norm.
2 code implementations • 21 Nov 2017 • Gilles Blanchard, Aniket Anand Deshmukh, Urun Dogan, Gyemin Lee, Clayton Scott
In the problem of domain generalization (DG), there are labeled training data sets from several related prediction problems, and the goal is to make accurate predictions on future unlabeled data sets that are not known to the learner.
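The paper's kernel approach represents each test point together with an embedding of the marginal distribution it came from; below is a rough sketch of that augmentation, with random Fourier features standing in for the kernel mean embedding (the actual kernel machine and kernels of the paper are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(9)

# Random Fourier features standing in for a kernel feature map.
D, d = 100, 2
W = rng.normal(size=(d, D))
b = rng.uniform(0, 2 * np.pi, D)
feat = lambda X: np.sqrt(2.0 / D) * np.cos(X @ W + b)

def augment(X):
    """Concatenate each point's features with an empirical embedding of
    the whole data set's marginal, so a classifier trained on several
    domains can adapt its decision to the domain at hand."""
    mean_emb = feat(X).mean(axis=0)
    return np.hstack([feat(X), np.tile(mean_emb, (len(X), 1))])

domain = rng.normal(loc=2.0, size=(50, d))     # one unlabeled domain
print(augment(domain).shape)                   # (50, 2 * D)
```

The augmented features can then be fed to any standard classifier trained across the labeled domains.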
1 code implementation • 19 Oct 2017 • Gilles Blanchard, Marc Hoffmann, Markus Reiß
We consider truncated SVD (or spectral cut-off, projection) estimators for a prototypical statistical inverse problem in dimension $D$.
Statistics Theory (math.ST); MSC classes: 65J20, 62G07
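A minimal spectral cut-off sketch for a prototypical problem $Y = \mathsf{A}\mu + \xi$ (a generic illustration of the estimator class, with an arbitrary diagonal operator and oracle choice of the truncation index):

```python
import numpy as np

rng = np.random.default_rng(10)

# Inverse problem Y = A mu + noise; spectral cut-off inverts only the
# first m singular directions, trading bias against amplified noise.
D = 200
A = np.diag(1.0 / np.arange(1, D + 1))     # mildly ill-posed operator
mu = 1.0 / np.arange(1, D + 1) ** 1.5      # smooth truth
Y = A @ mu + 0.01 * rng.normal(size=D)

U, s, Vt = np.linalg.svd(A)
def cutoff_estimator(m):
    coeff = (U.T @ Y)[:m] / s[:m]          # invert the first m components
    return Vt[:m].T @ coeff

errors = [np.linalg.norm(cutoff_estimator(m) - mu) for m in range(1, D + 1)]
print(int(np.argmin(errors)) + 1)          # oracle truncation index
```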
no code implementations • 30 Sep 2017 • Julian Katz-Samuels, Gilles Blanchard, Clayton Scott
Many machine learning problems can be characterized by mutual contamination models.
no code implementations • 22 Jun 2017 • Rémi Gribonval, Gilles Blanchard, Nicolas Keriven, Yann Traonmilin
We describe a general framework -- compressive statistical learning -- for resource-efficient large-scale learning: the training collection is compressed in one pass into a low-dimensional sketch (a vector of random empirical generalized moments) that captures the information relevant to the considered learning task.
no code implementations • 12 Nov 2016 • Gilles Blanchard, Nicole Mücke
These questions have been considered in past literature, but only under specific assumptions about the decay, typically polynomial, of the spectrum of the kernel mapping covariance operator.
no code implementations • 24 Oct 2016 • Gilles Blanchard, Nicole Mücke
We consider a distributed learning approach in supervised learning for a large class of spectral regularization methods in an RKHS framework.
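A minimal divide-and-conquer sketch with kernel ridge regression as the spectral regularization method (one instance of the class the paper covers): fit locally on each partition, then average the predictors.

```python
import numpy as np

rng = np.random.default_rng(11)

# Divide-and-conquer kernel ridge regression: split the sample into m
# machines, solve a local KRR on each, and average the predictions.
def krr_fit(X, y, lam, scale=0.5):
    K = np.exp(-((X[:, None] - X[None, :]) ** 2) / scale)
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
    return lambda t: np.exp(-((t[:, None] - X[None, :]) ** 2) / scale) @ alpha

n, m = 1200, 8
X = rng.uniform(-1, 1, n)
y = np.sin(3 * X) + 0.2 * rng.normal(size=n)

locals_ = [krr_fit(Xc, yc, lam=1e-3)
           for Xc, yc in zip(np.array_split(X, m), np.array_split(y, m))]
t = np.linspace(-1, 1, 400)
pred = np.mean([f(t) for f in locals_], axis=0)   # averaged estimator
print(np.mean((pred - np.sin(3 * t)) ** 2))
```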
no code implementations • 8 Jul 2016 • Gilles Blanchard, Nicole Krämer
We prove statistical rates of convergence for kernel-based least squares regression from i.i.d. data.
1 code implementation • 24 Jun 2016 • Gilles Blanchard, Marc Hoffmann, Markus Reiß
For linear inverse problems $Y=\mathsf{A}\mu+\xi$, it is classical to recover the unknown signal $\mu$ by iterative regularisation methods $(\widehat \mu^{(m)}, m=0, 1,\ldots)$ and halt at a data-dependent iteration $\tau$ using some stopping rule, typically based on a discrepancy principle, so that the weak (or prediction) squared-error $\|\mathsf{A}(\widehat \mu^{(\tau)}-\mu)\|^2$ is controlled.
Statistics Theory (math.ST); MSC classes: 65J20, 62G07
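A minimal Landweber-iteration sketch with discrepancy-principle stopping for the same prototypical $Y = \mathsf{A}\mu + \xi$ setup (the constant 1.1 and the noise model are illustrative; the paper's analysis covers general iterative regularisation):

```python
import numpy as np

rng = np.random.default_rng(12)

D = 200
A = np.diag(1.0 / np.arange(1, D + 1))
mu = 1.0 / np.arange(1, D + 1) ** 1.5
delta = 0.01                                # known noise level
Y = A @ mu + delta * rng.normal(size=D)

# Landweber iteration with discrepancy-principle stopping: take gradient
# steps on ||A mu_hat - Y||^2 and halt as soon as the residual drops to
# the order of the noise level.
step = 1.0 / np.linalg.norm(A, 2) ** 2
mu_hat = np.zeros(D)
for m in range(100_000):
    residual = A @ mu_hat - Y
    if np.linalg.norm(residual) <= 1.1 * delta * np.sqrt(D):
        break
    mu_hat = mu_hat - step * A.T @ residual
print(m, np.linalg.norm(mu_hat - mu))       # stopped iteration, error
```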
no code implementations • 14 Apr 2016 • Gilles Blanchard, Nicole Mücke
We consider a statistical inverse learning problem, where we observe the image of a function $f$ through a linear operator $A$ at i.i.d. random design points.
no code implementations • 12 May 2015 • Ilya Tolstikhin, Nikita Zhivotovskiy, Gilles Blanchard
This paper introduces a new complexity measure for transductive learning called Permutational Rademacher Complexity (PRC) and studies its properties.
no code implementations • 26 Nov 2014 • Ilya Tolstikhin, Gilles Blanchard, Marius Kloft
We show two novel concentration inequalities for suprema of empirical processes when sampling without replacement, which both take the variance of the functions into account.
no code implementations • 18 Jul 2014 • Andre Beinrucker, Ürün Dogan, Gilles Blanchard
We introduce extensions of stability selection, a method introduced by Meinshausen and Bühlmann (J. R. Stat. Soc. Ser. B 72:417-473, 2010) to stabilise variable selection methods.
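A minimal stability-selection sketch; the base selector below is simple marginal-correlation screening rather than the lasso of the original paper, but any variable selector fits the template: run it on many random half-samples and keep the variables chosen frequently.

```python
import numpy as np

rng = np.random.default_rng(13)

# Stability selection wraps any base variable selector: run it on many
# random half-samples and keep variables selected with high frequency.
def stability_selection(X, y, base_select, n_runs=100, threshold=0.6):
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_runs):
        idx = rng.choice(n, size=n // 2, replace=False)
        counts[base_select(X[idx], y[idx])] += 1
    return np.flatnonzero(counts / n_runs >= threshold)

# Base selector (illustrative): top-k marginal correlation screening.
top_k = lambda X, y: np.argsort(-np.abs(X.T @ y))[:5]

X = rng.normal(size=(200, 40))
beta = np.zeros(40); beta[[2, 9, 25]] = 1.5
y = X @ beta + rng.normal(size=200)
print(stability_selection(X, y, top_k))   # stable set ~ {2, 9, 25}
```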
no code implementations • 5 Mar 2013 • Gilles Blanchard, Marek Flaska, Gregory Handy, Sara Pozzi, Clayton Scott
For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions.
no code implementations • NeurIPS 2011 • Gilles Blanchard, Gyemin Lee, Clayton Scott
We develop a distribution-free, kernel-based approach to the problem.
no code implementations • NeurIPS 2011 • Marius Kloft, Gilles Blanchard
We derive an upper bound on the local Rademacher complexity of Lp-norm multiple kernel learning, which yields a tighter excess risk bound than global approaches.
no code implementations • NeurIPS 2010 • Gilles Blanchard, Nicole Krämer
Lower bounds on attainable rates depending on these two quantities were established in earlier literature, and we obtain upper bounds for the considered method that match these lower bounds (up to a log factor) if the true regression function belongs to the reproducing kernel Hilbert space.