1 code implementation • 19 Dec 2023 • Alonso Urbano, David W. Romero
Group equivariance ensures consistent responses to group transformations of the input, leading to more robust models and enhanced generalization capabilities.
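As a minimal illustration of this idea (a hypothetical numpy sketch, not code from the paper), the function below symmetrizes an arbitrary model over the four 90° rotations, so its response is identical on every rotated copy of the input; this is invariance, the special case of equivariance in which the output transforms trivially:

```python
import numpy as np

def orbit_average(f, x):
    """Symmetrize f over the C4 rotation group by averaging its outputs
    on every 90-degree-rotated copy of the input."""
    return np.mean([f(np.rot90(x, k)) for k in range(4)])

# A deliberately non-symmetric base model (position-weighted sum).
f = lambda img: float((img * np.arange(img.size).reshape(img.shape)).sum())
img = np.random.default_rng(0).normal(size=(4, 4))

# The averaged model gives a consistent response under rotations of the input.
assert np.isclose(orbit_average(f, img), orbit_average(f, np.rot90(img)))
```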
no code implementations • 22 Jul 2023 • Putri A. van der Linden, David W. Romero, Erik J. Bekkers
As a result, operations that rely on neighborhood information scale much worse for point clouds than for grid data, especially for large inputs and large neighborhoods.
no code implementations • 10 Feb 2023 • David W. Romero, Neil Zeghidour
We present Differentiable Neural Architectures (DNArch), a method that jointly learns the weights and the architecture of Convolutional Neural Networks (CNNs) by backpropagation.
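One generic way to make an architectural dimension learnable by backpropagation is to gate it with a differentiable mask; the sketch below (an illustrative assumption, not DNArch's actual mechanism) uses per-channel sigmoid gates so gradient descent can shrink or grow the effective network width:

```python
import numpy as np

def soft_width(features, alpha):
    """Differentiable channel mask: a sigmoid gate per channel lets
    gradient descent select the effective number of channels."""
    gates = 1.0 / (1.0 + np.exp(-alpha))   # in (0, 1), differentiable in alpha
    return features * gates

feats = np.ones(6)
alpha = np.array([5.0, 5.0, 5.0, -5.0, -5.0, -5.0])  # 3 open, 3 nearly closed
out = soft_width(feats, alpha)

# The nearly-closed gates suppress their channels almost entirely.
assert out[:3].sum() > 2.9 and out[3:].sum() < 0.1
```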
1 code implementation • 25 Jan 2023 • David M. Knigge, David W. Romero, Albert Gu, Efstratios Gavves, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn, Jan-Jakob Sonke
Performant Convolutional Neural Network (CNN) architectures must be tailored to specific tasks in order to consider the length, resolution, and dimensionality of the input data.
1 code implementation • 7 Jun 2022 • David W. Romero, David M. Knigge, Albert Gu, Erik J. Bekkers, Efstratios Gavves, Jakub M. Tomczak, Mark Hoogendoorn
The use of Convolutional Neural Networks (CNNs) is widespread in Deep Learning due to a range of desirable model properties that result in an efficient and effective machine learning framework.
no code implementations • 14 Apr 2022 • Tycho F. A. van der Ouderaa, David W. Romero, Mark van der Wilk
Equivariances provide useful inductive biases in neural network modeling, with the translation equivariance of convolutional neural networks being a canonical example.
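The canonical example can be verified directly: convolving a shifted signal equals shifting the convolved signal. The numpy sketch below (illustrative, using circular convolution so shifts are exact) checks this identity:

```python
import numpy as np

def circular_conv(x, k):
    """Circular 1D cross-correlation of signal x with kernel k."""
    n = len(x)
    return np.array([sum(x[(i + j) % n] * k[j] for j in range(len(k)))
                     for i in range(n)])

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
k = np.array([0.5, 0.25, 0.25])
shift = lambda v, s: np.roll(v, s)

# Translation equivariance: conv(shift(x)) == shift(conv(x)).
lhs = circular_conv(shift(x, 2), k)
rhs = shift(circular_conv(x, k), 2)
assert np.allclose(lhs, rhs)
```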
1 code implementation • 25 Oct 2021 • David M. Knigge, David W. Romero, Erik J. Bekkers
In addition, thanks to the increase in computational efficiency, we are able to implement G-CNNs equivariant to the $\mathrm{Sim(2)}$ group; the group of dilations, rotations and translations.
Ranked #1 on Rotated MNIST
1 code implementation • 19 Oct 2021 • David W. Romero, Suhas Lohit
Frequently, transformations occurring in data can be better represented by a subset of a group than by a group as a whole, e.g., rotations in $[-90^{\circ}, 90^{\circ}]$.
1 code implementation • ICLR 2022 • David W. Romero, Robert-Jan Bruintjes, Jakub M. Tomczak, Erik J. Bekkers, Mark Hoogendoorn, Jan C. van Gemert
In this work, we propose FlexConv, a novel convolutional operation with which high-bandwidth convolutional kernels of learnable size can be learned at a fixed parameter cost.
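The core idea can be sketched as a continuous kernel function masked by a Gaussian envelope whose width is a single differentiable parameter (the `sin` stand-in for the kernel network and the exact parameterization below are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def flexconv_kernel(positions, weights_fn, log_sigma):
    """Continuous kernel masked by a Gaussian envelope: the effective
    kernel size is controlled by sigma, one differentiable parameter."""
    sigma = np.exp(log_sigma)                    # keep sigma positive
    envelope = np.exp(-0.5 * (positions / sigma) ** 2)
    return weights_fn(positions) * envelope

pos = np.linspace(-1.0, 1.0, 33)          # normalized kernel coordinates
mlp = lambda p: np.sin(3.0 * p)           # stand-in for a small kernel MLP
small = flexconv_kernel(pos, mlp, log_sigma=np.log(0.1))
large = flexconv_kernel(pos, mlp, log_sigma=np.log(1.0))

# A smaller sigma suppresses the kernel tails: effectively a smaller kernel
# at the same parameter cost.
assert np.abs(small[0]) < np.abs(large[0])
```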
1 code implementation • ICLR 2022 • David W. Romero, Anna Kuzina, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn
Convolutional networks are unable to handle sequences of unknown size, and their memory horizon must be defined a priori.
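One way around this limitation, in the spirit of continuous kernel convolutions, is to parameterize the kernel as a continuous function of relative position and sample it at whatever length the sequence requires; the sketch below is a hypothetical illustration (the sinusoid stands in for a kernel network):

```python
import numpy as np

def continuous_kernel(length, params):
    """Sample a kernel from a continuous function of relative position,
    so one parameterization yields kernels of any length."""
    t = np.linspace(0.0, 1.0, length)      # relative positions in [0, 1]
    a, b, c = params
    return a * np.sin(b * t + c)           # stand-in for a kernel network

params = (0.5, 4.0, 0.1)
k_short = continuous_kernel(8, params)     # kernel for a short sequence
k_long = continuous_kernel(64, params)     # same parameters, longer horizon

assert k_short.shape == (8,) and k_long.shape == (64,)
# Endpoints agree: both are samples of the same underlying function.
assert np.isclose(k_short[0], k_long[0]) and np.isclose(k_short[-1], k_long[-1])
```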
Ranked #5 on Sequential Image Classification on Sequential MNIST
1 code implementation • ICLR 2021 • David W. Romero, Jean-Baptiste Cordonnier
We provide a general self-attention formulation to impose group equivariance to arbitrary symmetry groups.
1 code implementation • 9 Jun 2020 • David W. Romero, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn
In this work, we fill this gap by leveraging the symmetries inherent to time-series for the construction of equivariant neural networks.
1 code implementation • ICML 2020 • David W. Romero, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn
Although group convolutional networks are able to learn powerful representations based on symmetry patterns, they lack explicit means to learn meaningful relationships among them (e.g., relative positions and poses).
no code implementations • ICLR 2020 • David W. Romero, Mark Hoogendoorn
Equivariance is a desirable property, as it yields more parameter-efficient neural architectures and preserves the structure of the input through the feature mapping.