Brain-like approaches to unsupervised learning of hidden representations -- a comparative study

6 May 2020 · Naresh Balaji Ravichandran, Anders Lansner, Pawel Herman

Unsupervised learning of hidden representations has been one of the most vibrant research directions in machine learning in recent years. In this work we study the brain-like Bayesian Confidence Propagating Neural Network (BCPNN) model, recently extended to extract sparse distributed high-dimensional representations. The usefulness and class-dependent separability of the hidden representations, when trained on the MNIST and Fashion-MNIST datasets, are studied using an external linear classifier and compared with those of other unsupervised learning methods, including restricted Boltzmann machines and autoencoders.
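
The evaluation protocol described in the abstract (unsupervised training of hidden representations, followed by an external linear classifier on the fixed hidden codes) can be illustrated with a minimal sketch. The code below is not the paper's BCPNN implementation, for which no reference code is linked; it uses scikit-learn's BernoulliRBM as a stand-in for one of the compared unsupervised models, and the hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a linear-probe evaluation of unsupervised hidden
# representations. BernoulliRBM stands in for the unsupervised model;
# all hyperparameters below are illustrative, not taken from the paper.
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM

# Load MNIST and scale pixels to [0, 1].
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=10000, random_state=0
)

# Unsupervised stage: learn hidden representations without using labels.
rbm = BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=10, random_state=0)
rbm.fit(X_train)

# Supervised probe: a linear classifier trained on the fixed hidden codes.
H_train = rbm.transform(X_train)
H_test = rbm.transform(X_test)
clf = LogisticRegression(max_iter=1000)
clf.fit(H_train, y_train)
print("linear-probe accuracy on hidden representations:", clf.score(H_test, y_test))

# Baseline: the same linear classifier on raw pixels, for comparison.
clf_raw = LogisticRegression(max_iter=1000)
clf_raw.fit(X_train, y_train)
print("linear-probe accuracy on raw pixels:", clf_raw.score(X_test, y_test))
```

The raw-pixel baseline makes the comparison meaningful: a learned hidden representation is useful to the extent that the linear probe on the hidden codes outperforms the same probe on the raw inputs.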


Datasets

MNIST, Fashion-MNIST