Search Results for author: Elvis Dohmatob

Found 30 papers, 5 papers with code

Learning disconnected manifolds: a no GAN's land

no code implementations ICML 2020 Ugo Tanielian, Thibaut Issenhuth, Elvis Dohmatob, Jeremie Mary

Typical architectures of Generative Adversarial Networks make use of a unimodal latent/input distribution transformed by a continuous generator.

Model Collapse Demystified: The Case of Regression

no code implementations 12 Feb 2024 Elvis Dohmatob, Yunzhen Feng, Julia Kempe

In the era of large language models like ChatGPT, the phenomenon of "model collapse" refers to the situation whereby, as a model is trained recursively on data generated from previous generations of itself over time, its performance degrades until the model eventually becomes completely useless, i.e., the model collapses.

regression

A Tale of Tails: Model Collapse as a Change of Scaling Laws

no code implementations 10 Feb 2024 Elvis Dohmatob, Yunzhen Feng, Pu Yang, Francois Charton, Julia Kempe

We discover a wide range of decay phenomena, analyzing loss of scaling, shifted scaling with number of generations, the "un-learning" of skills, and grokking when mixing human and synthesized data.

Language Modelling Large Language Model +1

Scaling Laws for Associative Memories

no code implementations 4 Oct 2023 Vivien Cabannes, Elvis Dohmatob, Alberto Bietti

Learning arguably involves the discovery and memorization of abstract rules.

Memorization

Robust Linear Regression: Phase-Transitions and Precise Tradeoffs for General Norms

no code implementations 1 Aug 2023 Elvis Dohmatob, Meyer Scetbon

In this paper, we investigate the impact of test-time adversarial attacks on linear regression models and determine the optimal level of robustness that any model can reach while maintaining a given level of standard predictive performance (accuracy).

Adversarial Robustness regression

Robust Linear Regression: Gradient-descent, Early-stopping, and Beyond

no code implementations 31 Jan 2023 Meyer Scetbon, Elvis Dohmatob

However, we show that this strategy can be arbitrarily sub-optimal in the case of general Mahalanobis attacks.

regression

An Adversarial Robustness Perspective on the Topology of Neural Networks

1 code implementation 4 Nov 2022 Morgane Goibert, Thomas Ricatte, Elvis Dohmatob

In this paper, we investigate the impact of neural networks (NNs) topology on adversarial robustness.

Adversarial Robustness

Contextual bandits with concave rewards, and an application to fair ranking

no code implementations 18 Oct 2022 Virginie Do, Elvis Dohmatob, Matteo Pirotta, Alessandro Lazaric, Nicolas Usunier

We consider Contextual Bandits with Concave Rewards (CBCR), a multi-objective bandit problem where the desired trade-off between the rewards is defined by a known concave objective function, and the reward vector depends on an observed stochastic context.

Fairness Multi-Armed Bandits

Fast online ranking with fairness of exposure

no code implementations 13 Sep 2022 Nicolas Usunier, Virginie Do, Elvis Dohmatob

In this paper, we propose the first efficient online algorithm to optimize concave objective functions in the space of rankings which applies to every concave and smooth objective function, such as the ones found for fairness of exposure.

Fairness Recommendation Systems

Scalable MCMC Sampling for Nonsymmetric Determinantal Point Processes

1 code implementation 1 Jul 2022 Insu Han, Mike Gartrell, Elvis Dohmatob, Amin Karbasi

In this work, we develop a scalable MCMC sampling algorithm for $k$-NDPPs with low-rank kernels, thus enabling runtime that is sublinear in $n$.

Point Processes

Origins of Low-dimensional Adversarial Perturbations

no code implementations 25 Mar 2022 Elvis Dohmatob, Chuan Guo, Morgane Goibert

Finally, we show that if a decision-region is compact, then it admits a universal adversarial perturbation with $L_2$ norm which is $\sqrt{d}$ times smaller than the typical $L_2$ norm of a data point.

On the (Non-)Robustness of Two-Layer Neural Networks in Different Learning Regimes

no code implementations 22 Mar 2022 Elvis Dohmatob, Alberto Bietti

To better understand these factors, we provide a precise study of the adversarial robustness in different scenarios, from initialization to the end of training in different regimes, as well as intermediate scenarios, where initialization still plays a role due to "lazy" training.

Adversarial Robustness

Fundamental tradeoffs between memorization and robustness in random features and neural tangent regimes

no code implementations 4 Jun 2021 Elvis Dohmatob

More precisely, if $n$ is the number of training examples, $d$ is the input dimension, and $k$ is the number of hidden neurons in a two-layer neural network, we prove for a large class of activation functions that, if the model memorizes even a fraction of the training, then its Sobolev-seminorm is lower-bounded by (i) $\sqrt{n}$ in case of infinite-width random features (RF) or neural tangent kernel (NTK) with $d \gtrsim n$; (ii) $\sqrt{n}$ in case of finite-width RF with proportionate scaling of $d$ and $k$; and (iii) $\sqrt{n/k}$ in case of finite-width NTK with proportionate scaling of $d$ and $k$.

Memorization

On the Convergence of Smooth Regularized Approximate Value Iteration Schemes

no code implementations NeurIPS 2020 Elena Smirnova, Elvis Dohmatob

Entropy regularization, smoothing of Q-values and neural network function approximators are key components of state-of-the-art reinforcement learning (RL) algorithms, such as Soft Actor-Critic (Haarnoja et al., 2018).

reinforcement-learning Reinforcement Learning (RL)
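The entry above mentions entropy regularization and smoothing of Q-values. One standard form this takes in "soft" RL (shown here as general background, not as the paper's specific scheme) is replacing the hard maximum over Q-values with a temperature-scaled log-sum-exp, which recovers the hard maximum as the temperature goes to zero. A minimal sketch:

```python
import numpy as np

def soft_value(q, tau):
    """Entropy-regularized ("soft") state value: tau * logsumexp(q / tau).
    Smoothly approximates max(q); the approximation tightens as tau -> 0."""
    q = np.asarray(q, dtype=float)
    m = q.max()  # subtract the max before exponentiating, for numerical stability
    return m + tau * np.log(np.exp((q - m) / tau).sum())

q_values = [1.0, 2.0, 3.0]
print(soft_value(q_values, tau=0.01))  # close to max(q_values) = 3.0
```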

Implicit bias of any algorithm: bounding bias via margin

no code implementations 12 Nov 2020 Elvis Dohmatob

We measure the quality of such a hyperplane by its margin $\gamma(w)$, defined as minimum distance between any of the points $x_i$ and the hyperplane.
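The margin described in the snippet above, the minimum distance between any point $x_i$ and a hyperplane defined by $w$, can be computed directly. A minimal sketch (the bias term `b` and the example points are illustrative, not taken from the paper):

```python
import numpy as np

def margin(w, b, X):
    """Margin of the hyperplane {x : w.x + b = 0} with respect to points X:
    the minimum Euclidean distance from any row of X to the hyperplane."""
    distances = np.abs(X @ w + b) / np.linalg.norm(w)
    return distances.min()

# Two points at distances 1 and 2 from the line x0 = 0 (w = (1, 0), b = 0)
X = np.array([[1.0, 5.0], [-2.0, 3.0]])
print(margin(np.array([1.0, 0.0]), 0.0, X))  # → 1.0
```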

Classifier-independent Lower-Bounds for Adversarial Robustness

no code implementations 17 Jun 2020 Elvis Dohmatob

(1) We use optimal transport theory to derive variational formulae for the Bayes-optimal error a classifier can make on a given classification problem, subject to adversarial attacks.

Adversarial Attack Adversarial Robustness +1

Scalable Learning and MAP Inference for Nonsymmetric Determinantal Point Processes

2 code implementations ICLR 2021 Mike Gartrell, Insu Han, Elvis Dohmatob, Jennifer Gillenwater, Victor-Emmanuel Brunel

Determinantal point processes (DPPs) have attracted significant attention in machine learning for their ability to model subsets drawn from a large item collection.

Point Processes

Learning disconnected manifolds: a no GANs land

no code implementations 8 Jun 2020 Ugo Tanielian, Thibaut Issenhuth, Elvis Dohmatob, Jeremie Mary

Typical architectures of Generative Adversarial Networks make use of a unimodal latent distribution transformed by a continuous generator.

On the Convergence of Approximate and Regularized Policy Iteration Schemes

no code implementations 20 Sep 2019 Elena Smirnova, Elvis Dohmatob

Entropy regularized algorithms such as Soft Q-learning and Soft Actor-Critic, recently showed state-of-the-art performance on a number of challenging reinforcement learning (RL) tasks.

Q-Learning Reinforcement Learning (RL)

Adversarial Robustness via Label-Smoothing

no code implementations 27 Jun 2019 Morgane Goibert, Elvis Dohmatob

We study Label-Smoothing as a means for improving adversarial robustness of supervised deep-learning models.

Adversarial Robustness
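Label smoothing, the technique studied in the entry above, replaces hard one-hot training targets with softened distributions. The variant below (probability `1 - eps` on the true class, the remainder spread uniformly) is one common formulation shown for illustration; the paper itself studies label-smoothing as a robustness tool and may use different variants:

```python
import numpy as np

def smooth_labels(y, num_classes, eps=0.1):
    """One common label-smoothing variant: put (1 - eps) on the true class
    and spread eps uniformly over the remaining (num_classes - 1) classes."""
    onehot = np.eye(num_classes)[y]
    return onehot * (1.0 - eps) + (1.0 - onehot) * eps / (num_classes - 1)

targets = smooth_labels(np.array([0, 2]), num_classes=3, eps=0.1)
# Each row sums to 1, and the true class keeps probability 0.9.
```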

Distributionally Robust Counterfactual Risk Minimization

no code implementations 14 Jun 2019 Louis Faury, Ugo Tanielian, Flavian Vasile, Elena Smirnova, Elvis Dohmatob

This manuscript introduces the idea of using Distributionally Robust Optimization (DRO) for the Counterfactual Risk Minimization (CRM) problem.

counterfactual Decision Making

Learning Nonsymmetric Determinantal Point Processes

1 code implementation NeurIPS 2019 Mike Gartrell, Victor-Emmanuel Brunel, Elvis Dohmatob, Syrine Krichene

Our method imposes a particular decomposition of the nonsymmetric kernel that enables such tractable learning algorithms, which we analyze both theoretically and experimentally.

Information Retrieval Point Processes +2

Distributionally Robust Reinforcement Learning

no code implementations 23 Feb 2019 Elena Smirnova, Elvis Dohmatob, Jérémie Mary

Our formulation results in an efficient algorithm that accounts for a simple re-weighting of policy actions in the standard policy iteration scheme.

Continuous Control Q-Learning +2

Deep Determinantal Point Processes

no code implementations 17 Nov 2018 Mike Gartrell, Elvis Dohmatob, Jon Alberdi

While DPPs have substantial expressive power, they are fundamentally limited by the parameterization of the kernel matrix and their inability to capture nonlinear interactions between items within sets.

Point Processes

Generalized No Free Lunch Theorem for Adversarial Robustness

no code implementations 8 Oct 2018 Elvis Dohmatob

This manuscript presents some new impossibility results on adversarial robustness in machine learning, a very important yet largely open problem.

Adversarial Robustness

Learning brain regions via large-scale online structured sparse dictionary learning

no code implementations NeurIPS 2016 Elvis Dohmatob, Arthur Mensch, Gael Varoquaux, Bertrand Thirion

We propose a multivariate online dictionary-learning method for obtaining decompositions of brain images with structured and sparse components (aka atoms).

Dictionary Learning

Continuation of Nesterov's Smoothing for Regression with Structured Sparsity in High-Dimensional Neuroimaging

no code implementations 31 May 2016 Fouad Hadj-Selem, Tommy Lofstedt, Elvis Dohmatob, Vincent Frouin, Mathieu Dubois, Vincent Guillemot, Edouard Duchesnay

Nesterov's smoothing technique can be used to minimize a large number of non-smooth convex structured penalties but reasonable precision requires a small smoothing parameter, which slows down the convergence speed.

regression

FAASTA: A fast solver for total-variation regularization of ill-conditioned problems with application to brain imaging

no code implementations 22 Dec 2015 Gaël Varoquaux, Michael Eickenberg, Elvis Dohmatob, Bertrand Thirion

The total variation (TV) penalty, as many other analysis-sparsity problems, does not lead to separable factors or a proximal operator with a closed-form expression, such as soft thresholding for the $\ell_1$ penalty.

Brain Decoding

Region segmentation for sparse decompositions: better brain parcellations from rest fMRI

no code implementations 12 Dec 2014 Alexandre Abraham, Elvis Dohmatob, Bertrand Thirion, Dimitris Samaras, Gael Varoquaux

Functional Magnetic Resonance Images acquired during resting-state provide information about the functional organization of the brain through measuring correlations between brain areas.
