no code implementations • 13 Mar 2024 • Louis Fournier, Edouard Oyallon
Training large deep learning models requires parallelization techniques to scale.
no code implementations • 15 Dec 2023 • Léo Grinsztajn, Edouard Oyallon, Myung Jun Kim, Gaël Varoquaux
We study the benefits of language models in 14 analytical tasks on tables while varying the training size, as well as for a fuzzy join benchmark.
1 code implementation • NeurIPS 2023 • Adel Nabli, Eugene Belilovsky, Edouard Oyallon
Distributed training of Deep Learning models has been critical to many recent successes in the field.
1 code implementation • 12 Jun 2023 • Louis Fournier, Stéphane Rivaud, Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon
Forward gradients, i.e. directional derivatives computed in forward differentiation mode, have recently been shown to be usable for neural network training while avoiding problems generally associated with backpropagation, such as update locking and memory requirements.
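To make the estimator concrete, here is a minimal numpy sketch (my illustration, not the paper's method): project the true gradient onto a random direction $v \sim \mathcal{N}(0, I)$ and use $(\nabla f(w)^\top v)\, v$ as an unbiased gradient estimate. In practice the directional derivative comes from a single forward-mode JVP rather than from the analytic gradient used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective f(w) = 0.5 * ||w||^2, whose true gradient is w itself;
# a real implementation would obtain the directional derivative from
# forward-mode autodiff (one JVP), never materializing grad_true.
w = rng.normal(size=5)
grad_true = w

# Forward gradient: (directional derivative along v) * v, with v ~ N(0, I).
vs = rng.normal(size=(100_000, 5))
estimates = (vs @ grad_true)[:, None] * vs

print(estimates.mean(axis=0))  # close to grad_true: the estimator is unbiased
print(grad_true)
```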
1 code implementation • NeurIPS 2022 • Léo Grinsztajn, Edouard Oyallon, Gaël Varoquaux
While deep learning has enabled tremendous progress on text and image datasets, its superiority on tabular data is not clear.
1 code implementation • 26 Jul 2022 • Adel Nabli, Edouard Oyallon
This work introduces DADAO: the first decentralized, accelerated, asynchronous, primal, first-order algorithm to minimize a sum of $L$-smooth and $\mu$-strongly convex functions distributed over a given network of size $n$.
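For reference, the problem the abstract describes can be written out as follows (the dimension $d$ is my notation, not stated in the excerpt):

$$ \min_{x \in \mathbb{R}^d} \; \sum_{i=1}^{n} f_i(x), \qquad \text{each } f_i \; L\text{-smooth and } \mu\text{-strongly convex, held by one of the } n \text{ nodes of the network.} $$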
1 code implementation • 18 Jul 2022 • Léo Grinsztajn, Edouard Oyallon, Gaël Varoquaux
While deep learning has enabled tremendous progress on text and image datasets, its superiority on tabular data is not clear.
no code implementations • 6 Jul 2022 • Grégoire Sergeant-Perthuis, Jakob Maier, Joan Bruna, Edouard Oyallon
In the context of Neural Networks defined over $\mathcal{M}$, it indicates that point-wise non-linear operators are the only universal family that commutes with any group of symmetries, and justifies their systematic use in combination with dedicated linear operators commuting with specific symmetries.
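As a small, self-contained illustration of the commutation property (my example, much narrower than the paper's general statement): a point-wise non-linearity such as ReLU commutes with any coordinate permutation, while a generic linear map does not.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=6)
P = np.eye(6)[rng.permutation(6)]      # permutation matrix: a group action

relu = lambda z: np.maximum(z, 0.0)
print(np.allclose(relu(P @ x), P @ relu(x)))   # True: point-wise ops commute

A = rng.normal(size=(6, 6))            # a generic, non-point-wise operator
print(np.allclose(A @ (P @ x), P @ (A @ x)))   # False in general
```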
no code implementations • 28 Jan 2022 • Irene Tenison, Sai Aravind Sreeramadas, Vaikkunth Mugunthan, Edouard Oyallon, Irina Rish, Eugene Belilovsky
A major challenge in federated learning is the heterogeneity of data across clients, which can degrade the performance of standard FL algorithms.
no code implementations • 27 Jul 2021 • Othmane Laousy, Guillaume Chassagnon, Edouard Oyallon, Nikos Paragios, Marie-Pierre Revel, Maria Vakalopoulou
In this paper, we propose a deep reinforcement learning method for accurate localization of the L3 CT slice.
no code implementations • 11 Jun 2021 • Eugene Belilovsky, Louis Leconte, Lucas Caccia, Michael Eickenberg, Edouard Oyallon
With the use of a replay buffer we show that this approach can be extended to asynchronous settings, where modules can operate and continue to update with possibly large communication delays.
no code implementations • ICLR Workshop GTRL 2021 • Nathan Grinsztajn, Philippe Preux, Edouard Oyallon
In this work, we study the behavior of standard models for community detection under spectral manipulations.
no code implementations • 4 Jun 2021 • Nathan Grinsztajn, Louis Leconte, Philippe Preux, Edouard Oyallon
We present a new approach for learning unsupervised node representations in community graphs.
no code implementations • NeurIPS 2021 • Louis Leconte, Aymeric Dieuleveut, Edouard Oyallon, Eric Moulines, Gilles Pagès
The growing size of models and datasets has made the distributed implementation of stochastic gradient descent (SGD) an active field of research.
1 code implementation • 19 Jan 2021 • Louis Thiry, Michael Arbel, Eugene Belilovsky, Edouard Oyallon
A recent line of work showed that various forms of convolutional kernel methods can be competitive with standard supervised deep convolutional networks on datasets like CIFAR-10, obtaining accuracies in the range of 87-90% while being more amenable to theoretical analysis.
no code implementations • 1 Jan 2021 • Nathan Grinsztajn, Philippe Preux, Edouard Oyallon
In this work, we study the behavior of standard GCNs under spectral manipulations.
no code implementations • ICLR 2021 • Louis Thiry, Michael Arbel, Eugene Belilovsky, Edouard Oyallon
A recent line of work showed that various forms of convolutional kernel methods can be competitive with standard supervised deep convolutional networks on datasets like CIFAR-10, obtaining accuracies in the range of 87-90% while being more amenable to theoretical analysis.
1 code implementation • ICML 2020 • Edouard Oyallon
We propose the Interferometric Graph Transform (IGT), a new class of deep unsupervised graph convolutional networks for building graph representations.
2 code implementations • ICML 2020 • Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon
It is based on a greedy relaxation of the joint training objective, recently shown to be effective in the context of Convolutional Neural Networks (CNNs) on large-scale image classification.
1 code implementation • 29 Dec 2018 • Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon
Here we use 1-hidden layer learning problems to sequentially build deep networks layer by layer, which can inherit properties from shallow networks.
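A minimal numpy sketch of this greedy scheme, under simplifying assumptions (squared loss, fully-connected layers, a tiny synthetic problem); the paper itself trains convolutional stages with auxiliary classifiers at much larger scale.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary problem: two Gaussian blobs with labels in {-1, +1}.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 10)),
               rng.normal(1.0, 1.0, (100, 10))])
y = np.hstack([-np.ones(100), np.ones(100)])

def train_one_layer(X, y, width=32, steps=300, lr=0.05):
    """Solve one 1-hidden-layer problem (hidden weights W + linear head w)
    by gradient descent on a squared loss; only the hidden layer is kept."""
    n, d = X.shape
    W = rng.normal(0.0, 1.0 / np.sqrt(d), (d, width))
    w = np.zeros(width)
    for _ in range(steps):
        H = np.maximum(X @ W, 0.0)                    # ReLU activations
        err = H @ w - y
        w -= lr * (H.T @ err) / n
        W -= lr * (X.T @ ((err[:, None] * w) * (H > 0))) / n
    return W

# Greedy stacking: each stage trains its own shallow problem, then its
# frozen features become the input to the next stage's shallow problem.
features = X
for _ in range(3):
    W = train_one_layer(features, y)
    features = np.maximum(features @ W, 0.0)
```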
2 code implementations • 28 Dec 2018 • Mathieu Andreux, Tomás Angles, Georgios Exarchakis, Roberto Leonarduzzi, Gaspar Rochette, Louis Thiry, John Zarka, Stéphane Mallat, Joakim Andén, Eugene Belilovsky, Joan Bruna, Vincent Lostanlen, Muawiz Chaudhary, Matthew J. Hirn, Edouard Oyallon, Sixin Zhang, Carmine Cella, Michael Eickenberg
The wavelet scattering transform is an invariant signal representation suitable for many signal processing and machine learning applications.
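This entry corresponds to the Kymatio software package; a minimal usage sketch follows. The import path and call signature are taken from the Kymatio documentation as I recall it, so treat them as assumptions to check against the current release.

```python
import torch
from kymatio.torch import Scattering2D  # the library released with this paper

# Scattering transform up to scale 2^J on 32x32 images.
scattering = Scattering2D(J=2, shape=(32, 32))

x = torch.randn(8, 1, 32, 32)  # a batch of images
Sx = scattering(x)             # translation-invariant, deformation-stable coefficients
print(Sx.shape)
```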
1 code implementation • NeurIPS 2019 • Lenaic Chizat, Edouard Oyallon, Francis Bach
In a series of recent theoretical works, it was shown that strongly over-parameterized neural networks trained with gradient-based methods could converge exponentially fast to zero training loss, with their parameters hardly varying.
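The regime in question, dubbed lazy training, is the one where the model stays close to its linearization at initialization, so that gradient descent on the nonlinear model tracks gradient descent on an affine model:

$$ f(w) \;\approx\; f(w_0) + D f(w_0)\,(w - w_0) \qquad \text{whenever } \|w - w_0\| \text{ stays small.} $$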
no code implementations • 27 Sep 2018 • Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon
Here we use 1-hidden layer learning problems to sequentially build deep networks layer by layer, which can inherit properties from shallow networks.
1 code implementation • ECCV 2018 • Edouard Oyallon, Eugene Belilovsky, Sergey Zagoruyko, Michal Valko
We study the first-order scattering transform as a candidate for reducing the signal processed by a convolutional neural network (CNN).
1 code implementation • 17 Sep 2018 • Edouard Oyallon, Sergey Zagoruyko, Gabriel Huang, Nikos Komodakis, Simon Lacoste-Julien, Matthew Blaschko, Eugene Belilovsky
In particular, by working in scattering space, we achieve competitive results both for supervised and unsupervised learning tasks, while making progress towards constructing more interpretable CNNs.
1 code implementation • 1 Jun 2018 • Damien Scieur, Edouard Oyallon, Alexandre d'Aspremont, Francis Bach
The Regularized Nonlinear Acceleration (RNA) algorithm is an acceleration method capable of improving the rate of convergence of many optimization schemes such as gradient descent, SAGA, or SVRG.
no code implementations • 24 May 2018 • Damien Scieur, Edouard Oyallon, Alexandre d'Aspremont, Francis Bach
Regularized nonlinear acceleration (RNA) estimates the minimum of a function by post-processing iterates from an algorithm such as the gradient method.
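A minimal numpy sketch of the RNA extrapolation step described in these two entries, assuming the standard formulation (fit combination weights that sum to one by regularized least squares on the residuals of successive iterates); the regularization scaling below is illustrative, not the paper's exact choice.

```python
import numpy as np

def rna(xs, lam=1e-8):
    """Extrapolate a better point from optimizer iterates xs (RNA sketch)."""
    X = np.stack(xs, axis=1)                 # (d, k+1)
    R = X[:, 1:] - X[:, :-1]                 # residuals of successive iterates
    K = R.T @ R
    # Weights c minimizing ||R c||^2 + regularization, subject to sum(c) = 1.
    c = np.linalg.solve(K + lam * np.trace(K) * np.eye(K.shape[0]),
                        np.ones(K.shape[0]))
    c /= c.sum()
    return X[:, :-1] @ c                     # weighted combination of iterates

# Demo: a few gradient-descent steps on a strongly convex quadratic
# (minimizer 0), then one extrapolation.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 20)); A = A @ A.T / 20 + np.eye(20)
x = rng.normal(size=20)
xs = [x.copy()]
for _ in range(10):
    x = x - 0.1 * (A @ x)
    xs.append(x.copy())
print(np.linalg.norm(xs[-1]), np.linalg.norm(rna(xs)))  # RNA typically closer to 0
```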
2 code implementations • ICLR 2018 • Jörn-Henrik Jacobsen, Arnold Smeulders, Edouard Oyallon
An analysis of i-RevNets learned representations suggests an alternative explanation for the success of deep networks by a progressive contraction and linear separation with depth.
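i-RevNets are built from exactly invertible blocks; the following is a hedged sketch of the additive coupling mechanism that makes such a block bijective for any inner function $F$, not the paper's exact architecture (which also relies on invertible down-sampling).

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
F = lambda z: np.tanh(z @ W)      # any function works: F need not be invertible

def block(x1, x2):
    # (x1, x2) -> (x2, x1 + F(x2)) is bijective regardless of F.
    return x2, x1 + F(x2)

def block_inverse(y1, y2):
    return y2 - F(y1), y1

x1, x2 = rng.normal(size=(2, 3, 4))
y1, y2 = block(x1, x2)
r1, r2 = block_inverse(y1, y2)
print(np.allclose(r1, x1), np.allclose(r2, x2))   # True True: exact inversion
```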
2 code implementations • ICCV 2017 • Edouard Oyallon, Eugene Belilovsky, Sergey Zagoruyko
Combining scattering networks with a modern ResNet, we achieve a single-crop top-5 error of 11.4% on ImageNet ILSVRC2012, comparable to the ResNet-18 architecture, while utilizing only 10 layers.
no code implementations • 12 Mar 2017 • Jörn-Henrik Jacobsen, Edouard Oyallon, Stéphane Mallat, Arnold W. M. Smeulders
Multiscale hierarchical convolutional networks are structured deep convolutional networks where layers are indexed by progressively higher dimensional attributes, which are learned from training data.
1 code implementation • CVPR 2017 • Edouard Oyallon
We show that increasing the width of our network permits being competitive with very deep networks.
1 code implementation • CVPR 2015 • Edouard Oyallon, Stéphane Mallat
Dictionary learning algorithms or supervised deep convolutional networks have considerably improved the efficiency of predefined feature representations such as SIFT.
1 code implementation • 20 Dec 2013 • Edouard Oyallon, Stéphane Mallat, Laurent Sifre
We introduce a two-layer wavelet scattering network, for object classification.