no code implementations • 21 Feb 2024 • Zihao Chen, Johannes Leugering, Gert Cauwenberghs, Shantanu Chakrabartty
In this paper, we derive new theoretical lower bounds on energy dissipation when training AI systems using different learning-in-memory (LIM) approaches.
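The excerpt does not reproduce the bounds themselves. Purely for orientation, here is a worked instance of the Landauer limit, the thermodynamic floor against which lower bounds of this kind are typically referenced (its relation to this paper's specific LIM bounds is an assumption):

$$E_{\min} = k_B T \ln 2 \approx (1.38 \times 10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693) \approx 2.9 \times 10^{-21}\,\mathrm{J}$$

per bit erased at room temperature, so even a single full update of $10^9$ binary weights dissipates at least about $3 \times 10^{-12}\,$J in this idealized limit; practical training hardware operates many orders of magnitude above this floor.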
no code implementations • 9 Feb 2024 • Ali Safa, Vikrant Jaltare, Samira Sebt, Kameron Gano, Johannes Leugering, Georges Gielen, Gert Cauwenberghs
Our results show that the proposed approach substantially outperforms backprop, achieving up to $27\%$ higher accuracy under strong hardware non-idealities.
no code implementations • 20 Oct 2023 • Jaeseoung Park, Ashwani Kumar, Yucheng Zhou, Sangheon Oh, Jeong-Hoon Kim, Yuhan Shi, Soumil Jain, Gopabandhu Hota, Amelie L. Nagle, Catherine D. Schuman, Gert Cauwenberghs, Duygu Kuzum
To address all these challenges, we developed a forming-free and bulk switching RRAM technology based on a trilayer metal-oxide stack.
no code implementations • 27 Sep 2023 • Yuhong Zhang, Qin Li, Sujal Nahata, Tasnia Jamal, Shih-kuen Cheng, Gert Cauwenberghs, Tzyy-Ping Jung
With the recent proliferation of large language models (LLMs), such as Generative Pre-trained Transformers (GPT), there has been a significant shift in exploring human and machine comprehension of semantic language meaning.
no code implementations • 27 Sep 2023 • Gwenevere Frank, Seyed Yahya Shirazi, Jason Palmer, Gert Cauwenberghs, Scott Makeig, Arnaud Delorme
A multitude of ICA algorithms for EEG decomposition exist, and their relative effectiveness has previously been studied.
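As a hedged illustration of the kind of decomposition being compared, here is a minimal sketch using scikit-learn's FastICA on a toy two-channel mixture; FastICA is only one of the many ICA variants such studies evaluate, and the synthetic data stands in for real EEG:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy stand-in for an EEG recording: two mixed sources, (n_samples, n_channels).
rng = np.random.default_rng(0)
sources = np.stack([np.sin(np.linspace(0, 40, 5000)),
                    rng.laplace(size=5000)], axis=1)
eeg = sources @ rng.normal(size=(2, 2)).T   # linear "scalp" mixture

ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(eeg)          # estimated unmixed sources
print(components.shape)                      # (5000, 2)
```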
no code implementations • 8 Jun 2023 • Ali Safa, Tim Verbelen, Lars Keuninckx, Ilja Ocket, André Bourdoux, Francky Catthoor, Georges Gielen, Gert Cauwenberghs
This work studies how brain-inspired neural ensembles equipped with local Hebbian plasticity can perform active inference (AIF) in order to control dynamical agents.
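A minimal sketch of the kind of local Hebbian update such ensembles rely on, where each weight changes using only locally available pre- and postsynaptic activity; the learning rate and plain outer-product form are assumptions, not the paper's exact rule:

```python
import numpy as np

def hebbian_update(w, pre, post, lr=1e-3):
    """Local Hebbian plasticity: each weight changes using only the activity
    of the two neurons it connects; no global error signal is required."""
    return w + lr * np.outer(post, pre)

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=(4, 8))   # post x pre
pre = rng.random(8)                      # presynaptic activity
w = hebbian_update(w, pre, w @ pre)      # strengthen co-active pairs
```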
1 code implementation • 10 Apr 2023 • Jason Yik, Korneel Van den Berghe, Douwe den Blanken, Younes Bouhadjar, Maxime Fabre, Paul Hueber, Denis Kleyko, Noah Pacik-Nelson, Pao-Sheng Vincent Sun, Guangzhi Tang, Shenqi Wang, Biyan Zhou, Soikat Hasan Ahmed, George Vathakkattil Joseph, Benedetto Leto, Aurora Micheli, Anurag Kumar Mishra, Gregor Lenz, Tao Sun, Zergham Ahmed, Mahmoud Akl, Brian Anderson, Andreas G. Andreou, Chiara Bartolozzi, Arindam Basu, Petrut Bogdan, Sander Bohte, Sonia Buckley, Gert Cauwenberghs, Elisabetta Chicca, Federico Corradi, Guido de Croon, Andreea Danielescu, Anurag Daram, Mike Davies, Yigit Demirag, Jason Eshraghian, Tobias Fischer, Jeremy Forest, Vittorio Fra, Steve Furber, P. Michael Furlong, William Gilpin, Aditya Gilra, Hector A. Gonzalez, Giacomo Indiveri, Siddharth Joshi, Vedant Karia, Lyes Khacef, James C. Knight, Laura Kriener, Rajkumar Kubendran, Dhireesha Kudithipudi, Yao-Hong Liu, Shih-Chii Liu, Haoyuan Ma, Rajit Manohar, Josep Maria Margarit-Taulé, Christian Mayr, Konstantinos Michmizos, Dylan Muir, Emre Neftci, Thomas Nowotny, Fabrizio Ottati, Ayca Ozcelikkale, Priyadarshini Panda, Jongkil Park, Melika Payvand, Christian Pehle, Mihai A. Petrovici, Alessandro Pierro, Christoph Posch, Alpha Renner, Yulia Sandamirskaya, Clemens JS Schaefer, André van Schaik, Johannes Schemmel, Samuel Schmidgall, Catherine Schuman, Jae-sun Seo, Sadique Sheik, Sumit Bam Shrestha, Manolis Sifalakis, Amos Sironi, Matthew Stewart, Kenneth Stewart, Terrence C. Stewart, Philipp Stratmann, Jonathan Timcheck, Nergis Tömen, Gianvito Urgese, Marian Verhelst, Craig M. Vineyard, Bernhard Vogginger, Amirreza Yousefzadeh, Fatima Tuz Zohora, Charlotte Frenkel, Vijay Janapa Reddi
The NeuroBench framework introduces a common set of tools and systematic methodology for inclusive benchmark measurement, delivering an objective reference framework for quantifying neuromorphic approaches in both hardware-independent (algorithm track) and hardware-dependent (system track) settings.
no code implementations • 17 Aug 2021 • Weier Wan, Rajkumar Kubendran, Clemens Schaefer, S. Burc Eryilmaz, Wenqiang Zhang, Dabin Wu, Stephen Deiss, Priyanka Raina, He Qian, Bin Gao, Siddharth Joshi, Huaqiang Wu, H. -S. Philip Wong, Gert Cauwenberghs
Realizing today's cloud-level artificial intelligence functionalities directly on devices distributed at the edge of the internet calls for edge hardware capable of processing multiple modalities of sensory data (e.g. video, audio) at unprecedented energy efficiency.
no code implementations • 23 May 2018 • Chetan Singh Thakur, Jamal Molin, Gert Cauwenberghs, Giacomo Indiveri, Kundan Kumar, Ning Qiao, Johannes Schemmel, Runchun Wang, Elisabetta Chicca, Jennifer Olson Hasler, Jae-sun Seo, Shimeng Yu, Yu Cao, André van Schaik, Ralph Etienne-Cummings
Neuromorphic engineering (NE) encompasses a diverse range of approaches to information processing that are inspired by neurobiological systems, and this feature distinguishes neuromorphic systems from conventional computing systems.
no code implementations • 17 Nov 2017 • Hesham Mostafa, Vishwajith Ramesh, Gert Cauwenberghs
Updating the features or weights in one layer, however, requires waiting for the propagation of error signals from higher layers.
no code implementations • 29 Sep 2017 • Georgios Detorakis, Sadique Sheik, Charles Augustine, Somnath Paul, Bruno U. Pedroni, Nikil Dutt, Jeffrey Krichmar, Gert Cauwenberghs, Emre Neftci
Embedded, continual learning for autonomous and adaptive behavior is a key application of neuromorphic hardware.
no code implementations • 14 Aug 2017 • Hesham Mostafa, Gert Cauwenberghs
This allows us to use the proposed networks in a variational learning setting where stochastic backpropagation is used to optimize a lower bound on the data log likelihood, thereby learning a generative model of the data.
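The "stochastic backpropagation" here refers to differentiating through sampled variables; below is a minimal sketch of the standard reparameterized Gaussian lower-bound (ELBO) estimate, assuming the usual z = mu + sigma * eps trick rather than the paper's specific spiking-network formulation:

```python
import torch

def elbo_estimate(mu, log_var, decoder, x):
    """One-sample ELBO estimate with the reparameterization trick:
    z = mu + sigma * eps keeps the bound differentiable in mu, sigma."""
    eps = torch.randn_like(mu)
    z = mu + torch.exp(0.5 * log_var) * eps
    recon_ll = -0.5 * ((x - decoder(z)) ** 2).sum()            # Gaussian likelihood term
    kl = -0.5 * (1 + log_var - mu ** 2 - log_var.exp()).sum()  # KL to N(0, I)
    return recon_ll - kl

# Toy usage with an identity "decoder": gradients flow through the sample.
mu = torch.zeros(4, requires_grad=True)
log_var = torch.zeros(4, requires_grad=True)
loss = -elbo_estimate(mu, log_var, lambda z: z, torch.randn(4))
loss.backward()
```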
no code implementations • 15 Jun 2017 • Hesham Mostafa, Bruno Pedroni, Sadique Sheik, Gert Cauwenberghs
In this paper, we describe a hardware-efficient on-line learning technique for feedforward multi-layer ANNs that is based on pipelined backpropagation.
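A toy sketch of the pipelining idea, assuming a two-layer regression net in which the hidden layer reuses a one-step-stale error signal instead of stalling until the current input's gradient arrives (the paper's hardware mapping is more involved than this):

```python
import numpy as np

# Toy sketch: a two-layer regression net where the hidden layer applies a
# one-step-stale error instead of waiting for the current backward pass.
rng = np.random.default_rng(0)
w1 = rng.normal(scale=0.1, size=(16, 8))   # input -> hidden
w2 = rng.normal(scale=0.1, size=(1, 16))   # hidden -> output
stale = None                               # (hidden error, input) from step t-1
lr = 1e-2

for step in range(100):
    x = rng.random(8)
    y = x.sum()                            # toy regression target
    h = np.tanh(w1 @ x)
    delta_out = w2 @ h - y                 # fresh output-layer error
    if stale is not None:                  # hidden layer updates immediately,
        d_prev, x_prev = stale             # but with last step's error
        w1 -= lr * np.outer(d_prev, x_prev)
    w2 -= lr * np.outer(delta_out, h)
    stale = ((w2.T @ delta_out) * (1 - h ** 2), x)
```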
no code implementations • 5 Jan 2017 • Sadique Sheik, Somnath Paul, Charles Augustine, Gert Cauwenberghs
Several learning rules for synaptic plasticity that depend on either spike timing or internal state variables have been proposed in the past, imparting varying computational capabilities to Spiking Neural Networks.
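For concreteness, a minimal sketch of one member of this family: pair-based STDP with exponential windows, where causal pre-before-post pairings potentiate and acausal pairings depress (the amplitudes and time constant below are assumed values):

```python
import numpy as np

A_plus, A_minus, tau = 0.01, 0.012, 20.0   # assumed amplitudes, time constant (ms)

def stdp_dw(pre_times, post_times):
    """All-to-all pair-based STDP: pre-before-post pairings potentiate,
    post-before-pre pairings depress, each weighted exponentially in |dt|."""
    dw = 0.0
    for t_pre in pre_times:
        for t_post in post_times:
            dt = t_post - t_pre
            if dt > 0:
                dw += A_plus * np.exp(-dt / tau)
            elif dt < 0:
                dw -= A_minus * np.exp(dt / tau)
    return dw

print(stdp_dw([10.0], [15.0]))   # causal pair -> positive weight change
```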
no code implementations • 27 Sep 2016 • S. Burc Eryilmaz, Emre Neftci, Siddharth Joshi, Sang-Bum Kim, Matthew BrightSky, Hsiang-Lan Lung, Chung Lam, Gert Cauwenberghs, H. -S. Philip Wong
Current large-scale implementations of deep learning and data mining require thousands of processors and massive amounts of off-chip memory, and consume gigajoules of energy.
no code implementations • 11 Jul 2016 • Bruno U. Pedroni, Sadique Sheik, Siddharth Joshi, Georgios Detorakis, Somnath Paul, Charles Augustine, Emre Neftci, Gert Cauwenberghs
We present a novel method for realizing both causal and acausal weight updates using only forward lookup access of the synaptic connectivity table, permitting memory-efficient implementation.
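A loose sketch of how both update types can be serviced from presynaptic events alone, so the synaptic table is only ever traversed in the forward direction; the connectivity structure is hypothetical and the paper's exact bookkeeping for deferred causal updates may differ:

```python
import numpy as np

# Hypothetical forward-only connectivity: pre_id -> [(post_id, weight_index)].
forward_table = {0: [(0, 0), (1, 1)]}
weights = np.array([0.5, 0.5])
last_post_spike = np.array([12.0, 3.0])   # latest post-spike times (ms)
A_plus, A_minus, tau = 0.01, 0.012, 20.0

def on_pre_spike(pre_id, t_pre, t_prev_pre):
    """Apply both STDP update types on presynaptic events only, so no
    reverse (post -> pre) lookup of the connectivity table is needed."""
    for post_id, wi in forward_table[pre_id]:
        t_post = last_post_spike[post_id]
        if t_post > t_prev_pre:   # post fired after the previous pre spike:
            weights[wi] += A_plus * np.exp(-(t_post - t_prev_pre) / tau)  # deferred causal LTP
        weights[wi] -= A_minus * np.exp(-(t_pre - t_post) / tau)          # acausal LTD

on_pre_spike(0, t_pre=15.0, t_prev_pre=5.0)
```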
no code implementations • 14 Nov 2015 • Emre O. Neftci, Bruno U. Pedroni, Siddharth Joshi, Maruan Al-Shedivat, Gert Cauwenberghs
Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex.
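A minimal sketch of such an unreliable ("blank-out") synapse model, assuming each synapse transmits independently with probability p; the multiplicative Bernoulli mask is what injects the stochasticity into otherwise deterministic dynamics:

```python
import numpy as np

rng = np.random.default_rng(0)

def unreliable_synapses(w, pre_spikes, p=0.5):
    """Each synapse transmits independently with probability p ('blank-out'
    noise), so repeated presentations of the same input produce different
    postsynaptic drive; the mean drive is p * w @ pre_spikes."""
    mask = rng.random(w.shape) < p
    return (w * mask) @ pre_spikes

w = rng.normal(size=(3, 5))
pre = (rng.random(5) < 0.2).astype(float)   # binary spike vector
print(unreliable_synapses(w, pre))          # stochastic postsynaptic input
```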
no code implementations • 24 Sep 2015 • Bruno U. Pedroni, Srinjoy Das, John V. Arthur, Paul A. Merolla, Bryan L. Jackson, Dharmendra S. Modha, Kenneth Kreutz-Delgado, Gert Cauwenberghs
For this, we first propose a method of realizing the Gibbs sampler using bio-inspired digital noisy integrate-and-fire neurons.
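A minimal sketch of the block-Gibbs sweep being realized, with a conventional sigmoid Bernoulli draw standing in for what the paper implements with noisy integrate-and-fire neurons:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def gibbs_sweep(v, W, b_h, b_v):
    """One block-Gibbs step of an RBM: sample hiddens given visibles, then
    visibles given hiddens. Each Bernoulli draw here stands in for the
    paper's noisy integrate-and-fire realization."""
    h = (rng.random(b_h.shape) < sigmoid(W @ v + b_h)).astype(float)
    v = (rng.random(b_v.shape) < sigmoid(W.T @ h + b_v)).astype(float)
    return v, h

W, b_h, b_v = rng.normal(size=(3, 6)), np.zeros(3), np.zeros(6)
v = (rng.random(6) < 0.5).astype(float)
v, h = gibbs_sweep(v, W, b_h, b_v)
```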
no code implementations • 26 Mar 2015 • Srinjoy Das, Bruno Umbria Pedroni, Paul Merolla, John Arthur, Andrew S. Cassidy, Bryan L. Jackson, Dharmendra Modha, Gert Cauwenberghs, Ken Kreutz-Delgado
Restricted Boltzmann Machines and Deep Belief Networks have been successfully used in a wide variety of applications including image classification and speech recognition.
no code implementations • 23 Dec 2014 • Maruan Al-Shedivat, Emre Neftci, Gert Cauwenberghs
These mappings are encoded in a distribution over a (possibly infinite) collection of models.
no code implementations • 5 Nov 2013 • Emre Neftci, Srinjoy Das, Bruno Pedroni, Kenneth Kreutz-Delgado, Gert Cauwenberghs
However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic, which do not map directly onto a dynamical neural substrate.
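For reference, the kind of discrete, exact-arithmetic update being contrasted: a minimal CD-1 step for a bias-free RBM in numpy (the learning rate is an assumed value):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, v0, lr=0.05):
    """One Contrastive Divergence (CD-1) step for a bias-free RBM: the
    discrete, exact-arithmetic outer-product update contrasted in the text."""
    ph0 = sigmoid(W @ v0)                                  # P(h=1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    v1 = (rng.random(v0.shape) < sigmoid(W.T @ h0)).astype(float)
    ph1 = sigmoid(W @ v1)                                  # P(h=1 | v1)
    return W + lr * (np.outer(ph0, v0) - np.outer(ph1, v1))

W = cd1_update(rng.normal(scale=0.1, size=(3, 6)),
               (rng.random(6) < 0.5).astype(float))
```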