1 code implementation • 7 Nov 2022 • Mateus Sangalli, Samy Blusseau, Santiago Velasco-Forero, Jesus Angulo
Equivariance of neural networks to transformations helps to improve their performance and reduce generalization error in computer vision tasks on datasets that present symmetries (e.g. scalings, rotations, translations).
no code implementations • 10 Oct 2022 • Mateus Sangalli, Samy Blusseau, Santiago Velasco-Forero, Jesus Angulo
Therefore, this paper introduces the Scale Equivariant U-Net (SEU-Net), a U-Net that is made approximately equivariant to a semigroup of scales and translations through careful application of subsampling and upsampling layers and the use of scale-equivariant layers.
no code implementations • 28 Jul 2022 • Samy Blusseau, Santiago Velasco-Forero, Jesus Angulo, Isabelle Bloch
In discrete signal and image processing, many dilations and erosions can be written as the max-plus and min-plus product of a matrix with a vector.
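As a rough illustration of that statement (the flat structuring element and function names here are illustrative choices, not taken from the paper), a 1-D flat dilation and erosion can be written as max-plus and min-plus matrix-vector products in NumPy:

```python
import numpy as np

def max_plus_product(W, f):
    # (W * f)_i = max_j (W[i, j] + f[j]) : max-plus matrix-vector product
    return np.max(W + f[np.newaxis, :], axis=1)

def min_plus_product(W, f):
    # (W * f)_i = min_j (W[i, j] + f[j]) : min-plus matrix-vector product
    return np.min(W + f[np.newaxis, :], axis=1)

# Flat structuring element of radius 1 on a 1-D signal of length 5:
# matrix entries are 0 inside the window and -inf (resp. +inf) outside.
n = 5
W_dil = np.full((n, n), -np.inf)
W_ero = np.full((n, n), np.inf)
for i in range(n):
    for j in range(max(0, i - 1), min(n, i + 2)):
        W_dil[i, j] = 0.0
        W_ero[i, j] = 0.0

f = np.array([0., 3., 1., 2., 0.])
dilation = max_plus_product(W_dil, f)  # running max over a 3-sample window
erosion = min_plus_product(W_ero, f)   # running min over a 3-sample window
```

Here `dilation` is `[3, 3, 3, 2, 2]` and `erosion` is `[0, 0, 1, 0, 0]`, i.e. the usual sliding-window max and min.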
1 code implementation • 27 Jun 2022 • Mateus Sangalli, Samy Blusseau, Santiago Velasco-Forero, Jesús Angulo
Symmetry is present in many tasks in computer vision, where the same class of objects can appear transformed, e.g. rotated due to different camera orientations, or scaled due to perspective.
no code implementations • 4 May 2021 • Mateus Sangalli, Samy Blusseau, Santiago Velasco-Forero, Jesus Angulo
The translation equivariance of convolutions can make convolutional neural networks translation equivariant or invariant.
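The translation-equivariance property can be checked numerically: convolving a shifted signal gives the same result as shifting the convolved signal. A minimal sketch (using a circular 1-D convolution, so the equality is exact; this toy setup is mine, not the paper's):

```python
import numpy as np

def conv(f, k):
    # Circular 1-D convolution of signal f with kernel k.
    n = len(f)
    return np.array([sum(f[(i - j) % n] * k[j] for j in range(len(k)))
                     for i in range(n)])

def shift(f, s):
    # Circular translation by s samples.
    return np.roll(f, s)

f = np.array([1., 2., 0., 4., 3.])
k = np.array([1., -1., 0.5])

# Equivariance: conv(shift(f)) == shift(conv(f)).
lhs = conv(shift(f, 2), k)
rhs = shift(conv(f, k), 2)
assert np.allclose(lhs, rhs)
```

On a finite non-periodic domain the same identity holds only approximately near the boundary, which is one reason equivariance of practical CNNs is approximate.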
no code implementations • 20 Mar 2019 • Bastien Ponchon, Santiago Velasco-Forero, Samy Blusseau, Jesus Angulo, Isabelle Bloch
This paper addresses the issue of building a part-based representation of a dataset of images.
1 code implementation • 19 Mar 2019 • Yunxiang Zhang, Samy Blusseau, Santiago Velasco-Forero, Isabelle Bloch, Jesus Angulo
Following recent advances in morphological neural networks, we propose to study in more depth how Max-plus operators can be exploited to define morphological units and how they behave when incorporated in layers of conventional neural networks.
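A Max-plus unit replaces the weighted sum of a conventional dense unit with a maximum of inputs plus weights. A minimal sketch of such a unit (the formulation y_i = max_j (x_j + w_ij) is the standard morphological one; the layer name and shapes here are illustrative):

```python
import numpy as np

def maxplus_layer(x, W):
    # y_i = max_j (x_j + W[i, j]): a dilation-like morphological unit,
    # i.e. the max-plus analogue of a dense layer's dot product.
    return np.max(x[np.newaxis, :] + W, axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=4)       # input activations
W = rng.normal(size=(3, 4))  # max-plus "weights"
y = maxplus_layer(x, W)      # 3 morphological outputs
```

Each output selects (and offsets) a single dominant input, which is what gives these units their pruning and feature-selection behavior when mixed into conventional layers.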
no code implementations • 25 May 2018 • José Lezama, Samy Blusseau, Jean-Michel Morel, Gregory Randall, Rafael Grompone von Gioi
Using a computational quantitative version of the non-accidentalness principle, we raise the possibility that the psychophysical and the (older) gestaltist setups, both applicable to dot or Gabor patterns, find a useful complement in a Turing test.