1 code implementation • 13 Feb 2023 • Pierre Humbert, Batiste Le Bars, Aurélien Bellet, Sylvain Arlot
In this paper, we introduce a conformal prediction method to construct prediction sets in a one-shot federated learning setting.
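Conformal prediction builds prediction sets with a guaranteed coverage level from held-out residuals. The sketch below shows the standard (centralized) split-conformal construction, not the paper's one-shot federated variant; the function name and interface are illustrative assumptions.

```python
import numpy as np

def split_conformal_halfwidth(residuals, alpha):
    """Half-width of a split-conformal prediction interval.

    residuals: |y_i - yhat_i| computed on a held-out calibration set.
    alpha: target miscoverage level (e.g. 0.1 for 90% coverage).
    Returns q such that [yhat(x) - q, yhat(x) + q] covers a new y
    with probability at least 1 - alpha (exchangeability assumed).
    """
    n = len(residuals)
    # Conformal quantile: the ceil((n + 1)(1 - alpha))-th order statistic.
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(residuals)[min(k, n) - 1]

rng = np.random.default_rng(0)
res = np.abs(rng.normal(size=1000))  # stand-in calibration residuals
q = split_conformal_halfwidth(res, alpha=0.1)
```

Larger alpha (more tolerated miscoverage) yields a shorter interval; the one-shot federated setting of the paper additionally requires combining such quantiles across agents in a single communication round.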
no code implementations • 29 May 2022 • Binh T. Nguyen, Bertrand Thirion, Sylvain Arlot
Identifying the relevant variables for a classification model with correct confidence levels is a central but difficult task in high dimensions.
no code implementations • 22 Nov 2020 • El Mehdi Saad, Gilles Blanchard, Sylvain Arlot
Greedy algorithms for feature selection are widely used for recovering sparse high-dimensional vectors in linear models.
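A standard greedy algorithm of this kind is orthogonal matching pursuit (OMP): at each step, pick the column most correlated with the current residual, then refit by least squares on all selected columns. The sketch below is a generic OMP implementation for illustration, not the specific procedure analyzed in the paper.

```python
import numpy as np

def omp(X, y, k):
    """Orthogonal matching pursuit: greedily select k columns of X."""
    support = []
    residual = y.copy()
    for _ in range(k):
        # Column most correlated with the residual, excluding chosen ones.
        correlations = np.abs(X.T @ residual)
        correlations[support] = -np.inf
        support.append(int(np.argmax(correlations)))
        # Refit on the whole current support (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ coef
    return sorted(support)

# Sparse linear model: only features 2, 7 and 11 are active.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
beta = np.zeros(20)
beta[[2, 7, 11]] = [3.0, -2.0, 4.0]
y = X @ beta + 0.1 * rng.normal(size=200)
recovered = omp(X, y, 3)
```

With well-conditioned designs and strong signals, as here, the greedy selection recovers the true support; the paper's contribution concerns guarantees for such recovery.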
2 code implementations • ICML 2020 • Tuan-Binh Nguyen, Jérôme-Alexis Chevalier, Bertrand Thirion, Sylvain Arlot
We develop an extension of the Knockoff Inference procedure, introduced by Barber and Candes (2015).
no code implementations • 30 Sep 2019 • Sylvain Arlot
This text is the rejoinder following the discussion of a survey paper about minimal penalties and the slope heuristics (Arlot, 2019).
no code implementations • 11 Sep 2019 • Guillaume Maillard, Sylvain Arlot, Matthieu Lerasle
Aggregated hold-out (Agghoo) is a method which averages learning rules selected by hold-out (that is, cross-validation with a single split).
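Agghoo can be sketched concretely: for each of several random splits, run hold-out selection over a grid of candidate rules, refit the winner on the split's training part, and average the resulting predictors. Below is a minimal sketch using ridge regression over a penalty grid; the function name and ridge-specific choices are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def agghoo_predict(X, y, X_test, lambdas, n_splits=5, train_frac=0.8, seed=0):
    """Aggregated hold-out over a ridge-penalty grid (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    preds = []
    for _ in range(n_splits):
        # One random train/validation split (hold-out with a single split).
        perm = rng.permutation(n)
        m = int(train_frac * n)
        tr, va = perm[:m], perm[m:]
        best_pred, best_err = None, np.inf
        for lam in lambdas:
            # Ridge fit on the training part of this split.
            A = X[tr].T @ X[tr] + lam * np.eye(X.shape[1])
            w = np.linalg.solve(A, X[tr].T @ y[tr])
            err = np.mean((X[va] @ w - y[va]) ** 2)
            if err < best_err:
                best_err, best_pred = err, X_test @ w
        preds.append(best_pred)
    # Agghoo: average the predictors selected across splits.
    return np.mean(preds, axis=0)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=100)
pred = agghoo_predict(X, y, X, lambdas=[0.01, 0.1, 1.0, 10.0])
```

Averaging over splits reduces the variability of plain hold-out selection, which is the effect the paper quantifies.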
no code implementations • 22 Jan 2019 • Sylvain Arlot
Explicit connections are made with residual-variance estimators (with an original contribution on this topic, showing that for this task the slope heuristics performs almost as well as a residual-based estimator with the best model choice) and with some classical procedures such as the L-curve and elbow heuristics, Mallows' Cp, and Akaike's FPE.
no code implementations • 9 Mar 2017 • Sylvain Arlot
This text is a survey on cross-validation.
no code implementations • 6 Apr 2016 • Sylvain Arlot, Robin Genuer
This paper is a comment on the survey paper by Biau and Scornet (2016) about random forests.
no code implementations • 5 Jun 2015 • Rémi Lajugie, Piotr Bojanowski, Sylvain Arlot, Francis Bach
In this paper, we address the problem of multi-label classification.
no code implementations • NeurIPS 2014 • Damien Garreau, Rémi Lajugie, Sylvain Arlot, Francis Bach
The learning examples for this task are time series for which the true alignment is known.
no code implementations • 15 Jul 2014 • Sylvain Arlot, Robin Genuer
Under some regularity assumptions on the regression function, we show that the bias of an infinite forest decreases at a faster rate (with respect to the size of each tree) than a single tree.
no code implementations • 22 Oct 2012 • Sylvain Arlot, Matthieu Lerasle
Then, we compute the variance of V-fold cross-validation and related criteria, as well as the variance of key quantities for model selection performance.
no code implementations • NeurIPS 2009 • Sylvain Arlot, Francis R. Bach
This paper tackles the problem of selecting among several linear estimators in non-parametric regression; this includes model selection for linear regression, the choice of a regularization parameter in kernel ridge regression or spline smoothing, and the choice of a kernel in multiple kernel learning.