Generative Models

k-Sparse Autoencoder

Introduced by Makhzani et al. in k-Sparse Autoencoders

k-Sparse Autoencoders are autoencoders with linear activation functions in which only the $k$ highest activations in the hidden layer are kept; all other hidden units are zeroed out. This enforces exact sparsity in the hidden representation, and backpropagation flows only through the top $k$ activated units. In practice, the same effect can be achieved with a ReLU layer whose threshold is adjusted so that exactly the $k$ largest activities pass through.
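The selection step above can be sketched as a forward pass in NumPy. This is a minimal illustration, not the authors' implementation; the function name and weight shapes are hypothetical.

```python
import numpy as np

def k_sparse_forward(x, W_enc, b_enc, W_dec, b_dec, k):
    """Hypothetical sketch of a k-sparse autoencoder forward pass.

    The encoder is linear; only the k largest hidden activations per
    sample are kept and the rest are zeroed, giving exact sparsity.
    Gradients would flow only through the surviving units.
    """
    h = x @ W_enc + b_enc                     # linear encoder
    idx = np.argsort(h, axis=1)[:, -k:]       # indices of the k highest activities
    mask = np.zeros_like(h)
    np.put_along_axis(mask, idx, 1.0, axis=1)
    h_sparse = h * mask                       # keep only the top-k hidden units
    x_hat = h_sparse @ W_dec + b_dec          # linear decoder
    return x_hat, h_sparse
```

For example, with a hidden layer of 16 units and k=3, each sample's sparse code has exactly 3 nonzero entries.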

Source: k-Sparse Autoencoders

Papers


Tasks


Task                    Papers  Share
Classification          1       33.33%
Denoising               1       33.33%
General Classification  1       33.33%

Components


Component  Type
ReLU       Activation Functions
