no code implementations • 4 May 2024 • Jiaqi Lin, Malyaban Bal, Abhronil Sengupta
These results are comparable to those of convergent RNNs and SNNs trained by BPTT.
no code implementations • 4 May 2024 • Malyaban Bal, Yi Jiang, Abhronil Sengupta
Despite the growing prevalence of large language model (LLM) architectures, a crucial concern persists regarding their energy and power consumption, whose efficiency still lags far behind the remarkable energy efficiency of the human brain.
no code implementations • 1 Feb 2024 • Jiaqi Lin, Sen Lu, Malyaban Bal, Abhronil Sengupta
However, training SNNs is challenging due to the non-differentiable nature of the spiking mechanism.
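The non-differentiability referred to here is the hard spiking threshold (a Heaviside step), whose true derivative is zero almost everywhere. A common workaround in the SNN literature is to use a smooth surrogate derivative on the backward pass; the sketch below is illustrative only (the sigmoid surrogate and the sharpness parameter `beta` are assumptions, not this paper's method):

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: hard Heaviside threshold (non-differentiable)."""
    return (v >= threshold).astype(np.float64)

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Backward pass: derivative of a steep sigmoid, used as a smooth
    stand-in for the Heaviside's zero-almost-everywhere derivative."""
    s = 1.0 / (1.0 + np.exp(-beta * (v - threshold)))
    return beta * s * (1.0 - s)

v = np.array([0.2, 0.9, 1.1, 2.0])
spikes = spike(v)          # hard spikes used in the forward computation
grads = surrogate_grad(v)  # smooth surrogate used to propagate gradients
```

At the threshold itself the surrogate derivative peaks (here at `beta / 4`), so gradient flow is concentrated on neurons near their firing point.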
no code implementations • 18 Dec 2023 • Md Zesun Ahmed Mia, Malyaban Bal, Abhronil Sengupta
Preliminary attempts at incorporating the critical role of astrocytes - cells that constitute more than 50% of the cells in the human brain - into brain-inspired neuromorphic computing remain in their infancy.
1 code implementation • 21 Aug 2023 • Malyaban Bal, Abhronil Sengupta
Moreover, the convergence of the average spiking rate of neurons at equilibrium is utilized to develop a novel ANN-SNN knowledge-distillation technique, wherein a pre-trained BERT model serves as the "teacher" to train our "student" spiking architecture.
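In knowledge distillation generally, the student is trained to match the teacher's temperature-softened output distribution. A minimal sketch of the standard Hinton-style distillation loss, as an illustration of the idea rather than this paper's exact objective (here the student's "logits" would be derived from the equilibrium spiking rates mentioned above; the temperature `T` is an assumption):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T**2
    as in standard Hinton-style knowledge distillation."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return (T ** 2) * np.sum(p_t * (np.log(p_t) - np.log(p_s)))
```

The loss is zero when the student reproduces the teacher's distribution exactly and grows as the two diverge.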
1 code implementation • 14 Sep 2022 • Malyaban Bal, Abhronil Sengupta
However, by definition, EP requires the input to the model (a convergent RNN) to remain static during both phases of training.
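The static-input requirement comes from EP's two-phase fixed-point procedure: the network first relaxes freely to an equilibrium, then relaxes again with the output weakly nudged toward the target, and the weight update is read off from the difference of the two equilibria. A toy scalar sketch (the quadratic energy, cost, and all numeric values are illustrative assumptions, not the paper's model):

```python
# Toy Equilibrium Propagation on a scalar model.
# Energy:  E(s, w) = 0.5*s**2 - w*x*s   (free dynamics settle at dE/ds = 0)
# Cost:    C(s)    = 0.5*(s - y)**2     (nudged phase minimizes E + beta*C)
x, y, w, beta = 2.0, 1.0, 0.3, 1e-3

s_free = w * x                             # free-phase fixed point
s_nudge = (w * x + beta * y) / (1 + beta)  # weakly nudged fixed point

# EP estimate of the loss gradient: (1/beta) * [dE/dw(nudged) - dE/dw(free)],
# where dE/dw = -x*s for this energy.
g_ep = (1 / beta) * ((-x * s_nudge) - (-x * s_free))

# Exact gradient of the loss L(w) = 0.5*(w*x - y)**2 for comparison.
g_true = (w * x - y) * x
```

As `beta -> 0` the EP estimate converges to the exact gradient, but both fixed points are only well defined because the input `x` is held constant throughout both phases, which is the static-input constraint noted above.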