2 code implementations • 31 Oct 2023 • Meraj Hashemizadeh, Juan Ramirez, Rohan Sukumaran, Golnoosh Farnadi, Simon Lacoste-Julien, Jose Gallego-Posada
Model pruning is a popular approach to enable the deployment of large deep learning models on edge devices with restricted computational or storage capacities.
1 code implementation • 12 Sep 2023 • Hao-Jun Michael Shi, Tsung-Hsien Lee, Shintaro Iwasaki, Jose Gallego-Posada, Zhijing Li, Kaushik Rangadurai, Dheevatsa Mudigere, Michael Rabbat
It constructs a block-diagonal preconditioner where each block consists of a coarse Kronecker product approximation to full-matrix AdaGrad for each parameter of the neural network.
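The Kronecker-product preconditioning idea described above can be sketched for a single parameter block. This is a minimal illustrative toy, not the paper's distributed implementation: for a weight matrix, left and right gradient statistics are accumulated and the gradient is preconditioned by their inverse fourth roots. All names here (`ShampooSketch`, `matrix_power`) are made up for the sketch.

```python
import numpy as np

def matrix_power(mat, power, eps=1e-6):
    """Symmetric matrix power via eigendecomposition, with eigenvalue clipping."""
    vals, vecs = np.linalg.eigh(mat)
    vals = np.maximum(vals, eps)  # regularize near-zero eigenvalues
    return (vecs * vals**power) @ vecs.T

class ShampooSketch:
    """One block of a block-diagonal preconditioner: a coarse Kronecker-product
    approximation to full-matrix AdaGrad for a single m x n weight matrix."""

    def __init__(self, shape):
        m, n = shape
        self.L = np.zeros((m, m))  # left statistics, accumulates  G @ G^T
        self.R = np.zeros((n, n))  # right statistics, accumulates G^T @ G

    def step(self, grad, lr=0.1):
        self.L += grad @ grad.T
        self.R += grad.T @ grad
        # Preconditioned update: -lr * L^{-1/4} @ G @ R^{-1/4}
        return -lr * matrix_power(self.L, -0.25) @ grad @ matrix_power(self.R, -0.25)
```

Keeping one such block per parameter (rather than one huge preconditioner over all parameters jointly) is what makes the overall preconditioner block-diagonal.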
1 code implementation • 8 Aug 2022 • Jose Gallego-Posada, Juan Ramirez, Akram Erraqabi, Yoshua Bengio, Simon Lacoste-Julien
The performance of trained neural networks is robust to harsh levels of pruning.
1 code implementation • 8 Jul 2022 • Juan Ramirez, Jose Gallego-Posada
Advances in Implicit Neural Representations (INR) have motivated research on domain-agnostic compression techniques.
1 code implementation • 21 May 2022 • Sourya Basu, Jose Gallego-Posada, Francesco Viganò, James Rowbottom, Taco Cohen
Equivariance to symmetries has proven to be a powerful inductive bias in deep learning research.
no code implementations • NeurIPS Workshop LatinX_in_AI 2021 • Jose Gallego-Posada, Juan Ramirez De Los Rios, Akram Erraqabi
We propose to approach the problem of learning $L_0$-sparse networks using a constrained formulation of the optimization problem.
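A constrained formulation of this kind can be illustrated with a toy gradient descent-ascent loop on a Lagrangian. This sketch is not the paper's method: it swaps the expected $L_0$ norm for a differentiable $L_1$ surrogate and uses synthetic data, purely to show the primal-descent / dual-ascent mechanics of enforcing a sparsity budget with a multiplier.

```python
import numpy as np

# Toy constrained problem:
#   minimize ||w - w_target||^2   subject to   mean(|w|) <= density_budget
# via gradient descent-ascent on the Lagrangian
#   Lag(w, lam) = ||w - w_target||^2 + lam * (mean(|w|) - density_budget),
# with the multiplier projected to lam >= 0. The L1 mean is an illustrative
# stand-in for the (non-differentiable) expected L0 norm.

rng = np.random.default_rng(0)
w_target = rng.normal(size=10)   # unconstrained optimum (synthetic)
density_budget = 0.3

w = np.zeros(10)
lam = 0.0
for _ in range(5000):
    # Primal descent step on the Lagrangian
    grad_w = 2.0 * (w - w_target) + lam * np.sign(w) / w.size
    w = w - 0.05 * grad_w
    # Dual ascent on the constraint violation, projected to lam >= 0
    lam = max(0.0, lam + 0.1 * (np.mean(np.abs(w)) - density_budget))
```

At convergence the multiplier settles at a value that makes the sparsity constraint active, rather than requiring a hand-tuned penalty coefficient.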
1 code implementation • ICLR Workshop GTRL 2021 • Jose Gallego-Posada, Patrick Forré
Inspired by the fuzzy topological representation of a dataset employed in UMAP (McInnes et al., 2018), we propose a regularization principle for supervised learning based on the preservation of the simplicial complex structure of the data.
no code implementations • 2 Dec 2017 • Frans A. Oliehoek, Rahul Savani, Jose Gallego-Posada, Elise van der Pol, Edwin D. de Jong, Roderich Gross
We introduce Generative Adversarial Network Games (GANGs), which explicitly model a finite zero-sum game between a generator ($G$) and classifier ($C$) that use mixed strategies.