no code implementations • 2 Feb 2021 • Mats L. Richter, Wolf Byttner, Ulf Krumnack, Ludwig Schallner, Justin Shenk
Fully convolutional neural networks can process input of arbitrary size by applying a combination of downsampling and pooling.
2 code implementations • 15 Jun 2020 • Mats L. Richter, Justin Shenk, Wolf Byttner, Anders Arpteg, Mikael Huss
First, we show that a layer's output can be restricted to the eigenspace of its variance matrix without performance loss.
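A minimal sketch of this idea, assuming the "eigenspace restriction" means projecting a layer's (centered) activations onto the leading eigenvectors of their covariance matrix and mapping back; the function name and the 99%-variance cutoff are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def restrict_to_eigenspace(activations: np.ndarray, threshold: float = 0.99) -> np.ndarray:
    """Project layer outputs onto the eigenspace of their variance
    (covariance) matrix that explains `threshold` of the total variance.

    activations: (num_samples, num_features) matrix of layer outputs.
    """
    mean = activations.mean(axis=0)
    centered = activations - mean
    cov = np.cov(centered, rowvar=False)
    # Covariance matrices are symmetric, so eigh is appropriate
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    ratios = np.cumsum(eigvals) / eigvals.sum()
    # Smallest k whose top-k eigenvalues explain >= threshold of the variance
    k = int(np.searchsorted(ratios, threshold) + 1)
    basis = eigvecs[:, :k]                     # (num_features, k)
    # Project into the k-dimensional eigenspace and map back
    return (centered @ basis) @ basis.T + mean
```

For activations that already lie in a low-dimensional subspace, this restriction reproduces them almost exactly, which is the intuition behind the "without performance loss" claim.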
1 code implementation • 19 Jul 2019 • Justin Shenk, Mats L. Richter, Anders Arpteg, Mikael Huss
We propose a metric, Layer Saturation, defined as the proportion of eigenvalues needed to explain 99% of the variance of the latent representations, for analyzing the learned representations of neural network layers.
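The definition above can be sketched directly as a covariance eigendecomposition; this is a hedged illustration of the stated formula, not the authors' released code, and the function name is an assumption:

```python
import numpy as np

def layer_saturation(activations: np.ndarray, threshold: float = 0.99) -> float:
    """Layer Saturation: fraction of eigendirections needed to explain
    `threshold` of the variance of a layer's latent representations.

    activations: (num_samples, num_features) matrix of layer outputs.
    """
    # Covariance matrix of the latent representations
    cov = np.cov(activations, rowvar=False)
    # Eigenvalues of a covariance matrix are real; sort descending
    eigvals = np.linalg.eigvalsh(cov)[::-1]
    ratios = np.cumsum(eigvals) / eigvals.sum()
    # Smallest k such that the top-k eigenvalues explain >= threshold
    k = int(np.searchsorted(ratios, threshold) + 1)
    return k / len(eigvals)
```

A layer whose activations collapse onto one direction scores near 1/num_features (low saturation), while isotropic activations that use every dimension score near 1.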