Search Results for author: Dmitry Krotov

Found 16 papers, 7 papers with code

Neuron-Astrocyte Associative Memory

no code implementations 14 Nov 2023 Leo Kozachkov, Jean-Jacques Slotine, Dmitry Krotov

Such multi-neuron synapses are ubiquitous in models of Dense Associative Memory (also known as Modern Hopfield Networks) and are known to lead to superlinear memory storage capacity, which is a desirable computational feature.
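For reference, this is the standard Dense Associative Memory energy from the earlier Krotov-Hopfield work (not the neuron-astrocyte model introduced in this paper): the stored patterns enter the energy through a rapidly growing interaction function, which is what produces the multi-neuron synapses and the superlinear capacity mentioned above.

```latex
% Dense Associative Memory / Modern Hopfield energy (Krotov & Hopfield 2016 form);
% the neuron-astrocyte model of this paper is not reproduced here.
\[
E(\sigma) \;=\; -\sum_{\mu=1}^{K} F\!\left(\sum_{i=1}^{N} \xi^{\mu}_{i}\,\sigma_{i}\right),
\qquad F(x) = x^{n}.
\]
% For n = 2 this is the classical Hopfield energy (storage capacity ~ N);
% for n > 2 the effective n-body synapses give superlinear capacity, ~ N^{n-1}.
```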

Long Sequence Hopfield Memory

1 code implementation NeurIPS 2023 Hamza Tahir Chaudhry, Jacob A. Zavatone-Veth, Dmitry Krotov, Cengiz Pehlevan

Sequence memory is an essential attribute of natural and artificial intelligence that enables agents to encode, store, and retrieve complex sequences of stimuli and actions.

Attribute
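For orientation only, a minimal sketch of what sequence memory means for a Hopfield-style network, using the textbook asymmetric Hebbian construction rather than the model proposed in this paper; all sizes and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 200, 10                            # neurons, sequence length
xi = rng.choice([-1, 1], size=(L, N))     # random +/-1 patterns forming a sequence

# Asymmetric Hebbian rule: each stored pattern is wired to its successor.
W = sum(np.outer(xi[mu + 1], xi[mu]) for mu in range(L - 1)) / N

# Starting from a corrupted copy of the first pattern, each synchronous update
# moves the state toward the next pattern in the sequence.
state = np.where(rng.random(N) < 0.1, -xi[0], xi[0])   # ~10% flipped bits
for step in range(1, L):
    state = np.sign(W @ state)
    print(f"step {step}: overlap with pattern {step} = {state @ xi[step] / N:.2f}")
```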

End-to-end Differentiable Clustering with Associative Memories

1 code implementation 5 Jun 2023 Bishwajit Saha, Dmitry Krotov, Mohammed J. Zaki, Parikshit Ram

Clustering is a widely used unsupervised learning technique involving an intensive discrete optimization problem.

Clustering
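A hedged sketch of the general idea of making prototype-based clustering differentiable: soft "retrieval" weights over learnable prototypes, trained by gradient descent on a reconstruction loss. This is a generic stand-in, not the algorithm from the paper, and the data and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(100, 2)) for c in ([0, 0], [3, 0], [0, 3])])

K, beta, lr = 3, 5.0, 0.05
P = X[rng.choice(len(X), K, replace=False)].copy()     # prototypes ~ stored "memories"

for _ in range(200):
    d2 = ((X[:, None, :] - P[None, :, :]) ** 2).sum(-1)  # squared distances (300, K)
    a = np.exp(-beta * d2)
    a /= a.sum(1, keepdims=True)                          # soft retrieval weights
    recon = a @ P                                         # blend of retrieved prototypes
    grad = -2 * a.T @ (X - recon) / len(X)                # d(mean recon error)/dP, a held fixed
    P -= lr * grad

print(np.round(P, 2))   # prototypes drift toward dense regions of the data
```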

Sparse Distributed Memory is a Continual Learner

1 code implementation 20 Mar 2023 Trenton Bricken, Xander Davies, Deepak Singh, Dmitry Krotov, Gabriel Kreiman

Continual learning is a problem for artificial neural networks that their biological counterparts are adept at solving.

Continual Learning
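For context, a minimal sketch of classical Kanerva-style sparse distributed memory (write to, and read from, all hard locations within a Hamming radius of an address); the paper's specific SDM variant and its connection to continual learning are not reproduced here, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, r = 256, 2000, 112                       # word length, hard locations, Hamming radius

addresses = rng.integers(0, 2, size=(M, D))    # fixed random hard locations
counters = np.zeros((M, D))                    # the actual storage

def near(x):
    """Indices of hard locations within Hamming distance r of address x."""
    return np.flatnonzero((addresses != x).sum(1) <= r)

def write(addr, data):
    counters[near(addr)] += 2 * data - 1       # store bits as +/-1 increments

def read(addr):
    s = counters[near(addr)].sum(0)
    return (s > 0).astype(int)                 # per-bit majority vote

pattern = rng.integers(0, 2, size=D)
write(pattern, pattern)                        # autoassociative write
noisy = pattern.copy()
noisy[rng.choice(D, 20, replace=False)] ^= 1   # flip 20 bits
print((read(noisy) == pattern).mean())         # fraction of bits recovered
```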

Energy Transformer

4 code implementations NeurIPS 2023 Benjamin Hoover, Yuchen Liang, Bao Pham, Rameswar Panda, Hendrik Strobelt, Duen Horng Chau, Mohammed J. Zaki, Dmitry Krotov

Our work combines aspects of three promising paradigms in machine learning, namely, attention mechanism, energy-based models, and associative memory.

Graph Anomaly Detection, Graph Classification
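A small sketch of the energy-based view of attention that underlies this combination: a modern-Hopfield log-sum-exp energy whose gradient step reproduces softmax attention. The full Energy Transformer block also contains a Hopfield sub-network and normalization, which are not shown, and the dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, beta = 16, 8, 1.0
K = rng.normal(size=(n, d))          # keys / stored patterns
q = rng.normal(size=d)               # query token

def energy(q):
    # log-sum-exp attention energy (modern-Hopfield style)
    return -np.log(np.exp(beta * K @ q).sum()) / beta

# Analytic gradient: descending the energy pulls the query toward an
# attention-weighted combination of the keys (softmax attention output).
w = np.exp(beta * K @ q)
w /= w.sum()
grad = -w @ K                        # dE/dq

# Finite-difference check of the gradient
eps, i = 1e-5, 3
num = (energy(q + eps * np.eye(d)[i]) - energy(q - eps * np.eye(d)[i])) / (2 * eps)
print(np.allclose(num, grad[i], atol=1e-6))   # True
print(q - grad)                      # one unit descent step = q + softmax(beta qK^T) K
```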

Associative Learning for Network Embedding

no code implementations 30 Aug 2022 Yuchen Liang, Dmitry Krotov, Mohammed J. Zaki

The network embedding task is to represent each node in the network as a low-dimensional vector that incorporates topological and structural information.

Network Embedding, Node Classification
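As a generic illustration of the task definition only (not the associative-learning method proposed in the paper): a truncated SVD of a normalized adjacency matrix already yields low-dimensional node vectors that reflect topology. The toy graph below is made up for illustration.

```python
import numpy as np

# Toy undirected graph: two triangles joined by one edge
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
n, dim = 6, 2

A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1

# Symmetrically normalized adjacency, then truncated SVD -> node embeddings
d = A.sum(1)
A_norm = A / np.sqrt(np.outer(d, d))
U, S, _ = np.linalg.svd(A_norm)
emb = U[:, :dim] * S[:dim]

print(np.round(emb, 2))   # structurally similar nodes get similar coordinates
```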

Hierarchical Associative Memory

1 code implementation 14 Jul 2021 Dmitry Krotov

Dense Associative Memories or Modern Hopfield Networks have many appealing properties of associative memory.

Can a Fruit Fly Learn Word Embeddings?

2 code implementations ICLR 2021 Yuchen Liang, Chaitanya K. Ryali, Benjamin Hoover, Leopold Grinberg, Saket Navlakha, Mohammed J. Zaki, Dmitry Krotov

In this work we study a mathematical formalization of this network motif and apply it to learning the correlational structure between words and their context in a corpus of unstructured text, a common natural language processing (NLP) task.

Document Classification, Word Embeddings +2
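The network motif in question is, roughly, a sparse random expansion followed by k-winner-take-all, as in FlyHash; a minimal sketch of that motif is below. The paper additionally learns the projection from word/context co-occurrence, which is not shown, and all sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, k = 300, 2000, 40        # dense input dim, expansion dim, active units

# Sparse random expansion: each expansion unit samples a few input dimensions.
W = (rng.random((d_out, d_in)) < 0.05).astype(float)

def fly_code(x):
    """Sparse binary code: expand, then keep only the top-k most active units."""
    act = W @ x
    code = np.zeros(d_out, dtype=int)
    code[np.argsort(act)[-k:]] = 1
    return code

# Nearby inputs get strongly overlapping codes; unrelated inputs do not.
x = rng.normal(size=d_in)
y = x + 0.1 * rng.normal(size=d_in)   # small perturbation of x
z = rng.normal(size=d_in)             # unrelated vector
cx, cy, cz = fly_code(x), fly_code(y), fly_code(z)
print((cx & cy).sum(), (cx & cz).sum())   # large overlap vs. ~ k*k/d_out
```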

Large Associative Memory Problem in Neurobiology and Machine Learning

no code implementations ICLR 2021 Dmitry Krotov, John Hopfield

We show that these models are effective descriptions of a more microscopic (written in terms of biological degrees of freedom) theory that has additional (hidden) neurons and only requires two-body interactions between them.

BIG-bench Machine Learning, Retrieval +1
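A simplified, hedged version of the construction described here (notation assumed; the paper's general Lagrangian formulation is richer): hidden memory neurons couple to feature neurons only through pairwise terms, and eliminating them reproduces the effective many-body energy.

```latex
% Feature neurons v_i and hidden memory neurons h_mu interact only in pairs,
% through the couplings xi_{mu i}.
\[
E(v, h) \;=\; \sum_{\mu} \tilde{F}(h_{\mu}) \;-\; \sum_{\mu, i} h_{\mu}\, \xi_{\mu i}\, v_{i}.
\]
% Eliminating the hidden neurons at their stationary point,
% h_mu = f( sum_i xi_{mu i} v_i ), with f = F' and \tilde{F} the Legendre transform of F,
% yields the effective many-body Dense Associative Memory energy:
\[
E_{\mathrm{eff}}(v) \;=\; -\sum_{\mu} F\!\Big(\sum_{i} \xi_{\mu i}\, v_{i}\Big).
\]
```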

Bio-Inspired Hashing for Unsupervised Similarity Search

no code implementations ICML 2020 Chaitanya K. Ryali, John J. Hopfield, Leopold Grinberg, Dmitry Krotov

Building on inspiration from FlyHash and the ubiquity of sparse expansive representations in neurobiology, our work proposes a novel hashing algorithm BioHash that produces sparse high dimensional hash codes in a data-driven manner.
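A hedged sketch of the overall recipe (data-driven synapses followed by a sparse top-k code). The competitive, Hebbian-like update below is a generic stand-in, not the exact BioHash plasticity rule, and the dataset and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, k = 64, 512, 16
X = rng.normal(size=(5000, d_in))             # stand-in dataset
X /= np.linalg.norm(X, axis=1, keepdims=True)

W = rng.normal(size=(d_out, d_in))
W /= np.linalg.norm(W, axis=1, keepdims=True)

# Data-driven synapses via a simple competitive (Hebbian-like) rule:
# the winning unit for each sample moves its weights toward that sample.
lr = 0.02
for x in X:
    j = np.argmax(W @ x)
    W[j] += lr * (x - W[j])
    W[j] /= np.linalg.norm(W[j])

def sparse_hash(x):
    """Sparse binary hash code: the top-k most activated units fire."""
    h = np.zeros(d_out, dtype=int)
    h[np.argsort(W @ x)[-k:]] = 1
    return h

print(sparse_hash(X[0]).sum())                # k active bits out of d_out
```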

Local Unsupervised Learning for Image Analysis

no code implementations NeurIPS Workshop Neuro_AI 2019 Leopold Grinberg, John Hopfield, Dmitry Krotov

Local Hebbian learning is believed to be inferior in performance to end-to-end training using a backpropagation algorithm.

Unsupervised Learning by Competing Hidden Units

no code implementations 26 Jun 2018 Dmitry Krotov, John Hopfield

It is widely believed that the backpropagation algorithm is essential for learning good feature detectors in early layers of artificial neural networks, so that these detectors are useful for the task performed by the higher layers of that neural network.

Dense Associative Memory is Robust to Adversarial Inputs

no code implementations 4 Jan 2017 Dmitry Krotov, John J. Hopfield

Third, adversarial images constructed by models with a small power of the interaction vertex, which are equivalent to DNNs with rectified linear units (ReLU), fail to transfer to and fool models with higher-order interactions.

Semantic Similarity, Semantic Textual Similarity
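The interaction vertex referred to above is the rectified polynomial used throughout the Dense Associative Memory line of work:

```latex
% Rectified polynomial interaction vertex of order n.
\[
F_n(x) \;=\;
\begin{cases}
  x^{n}, & x \ge 0,\\
  0,     & x < 0.
\end{cases}
\]
% n = 1 corresponds to the ReLU case equivalent to a standard DNN; the claim is
% that adversarial images crafted against small-n models fail to fool large-n models.
```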

Dense Associative Memory for Pattern Recognition

2 code implementations NeurIPS 2016 Dmitry Krotov, John J. Hopfield

The proposed duality makes it possible to apply energy-based intuition from associative memory to analyze computational properties of neural networks with unusual activation functions - the higher rectified polynomials which until now have not been used in deep learning.
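A minimal sketch of the feed-forward side of that duality, under the assumption that the stored memories act as the hidden-layer weights and a higher rectified polynomial acts as the activation; this is illustrative, not the exact classifier update from the paper, and all sizes are made up.

```python
import numpy as np

def rect_poly(x, n):
    """Higher rectified polynomial activation: x^n for x >= 0, else 0 (n = 1 is ReLU)."""
    return np.where(x > 0, x, 0.0) ** n

rng = np.random.default_rng(0)
N, K, C = 784, 100, 10
v = rng.choice([-1.0, 1.0], size=N)          # visible (input) neurons
xi = rng.normal(size=(K, N)) / np.sqrt(N)    # memories double as hidden-layer weights
W_out = rng.normal(size=(C, K)) / np.sqrt(K)

hidden = rect_poly(xi @ v, n=3)              # n-body interaction viewed as an activation
scores = W_out @ hidden
probs = np.exp(scores - scores.max())
probs /= probs.sum()
print(np.round(probs, 3))                    # class probabilities for the toy input
```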
