k-Sparse Autoencoders are autoencoders with a linear activation function, where in each hidden layer only the $k$ highest activities are kept and the rest are set to zero. This enforces exact sparsity in the hidden representation. Backpropagation flows only through the top $k$ activated units. Equivalently, the operation can be implemented as a ReLU layer with an adjustable (per-sample) threshold set at the $k$-th largest activity.
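The encoder pass can be sketched as follows; this is a minimal NumPy illustration (the function name `k_sparse_forward` and the shapes are illustrative, not from the paper):

```python
import numpy as np

def k_sparse_forward(x, W, b, k):
    """Encoder pass of a k-sparse autoencoder: linear pre-activation,
    then keep only the k largest activities per sample, zeroing the rest."""
    z = x @ W + b  # linear activation, no elementwise nonlinearity
    # Indices of the k largest activities in each row
    idx = np.argpartition(z, -k, axis=1)[:, -k:]
    mask = np.zeros_like(z)
    np.put_along_axis(mask, idx, 1.0, axis=1)
    return z * mask  # exactly k-sparse hidden code

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))   # 4 samples, 16 input features
W = rng.normal(size=(16, 32))  # 32 hidden units
b = np.zeros(32)
h = k_sparse_forward(x, W, b, k=5)
```

Because the mask also gates the backward pass, gradients reach only the selected $k$ units, matching the training rule described above.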
Source: k-Sparse Autoencoders paper