Search Results for author: Shantanu Chakrabartty

Found 14 papers, 0 papers with code

Energy-efficiency Limits on Training AI Systems using Learning-in-Memory

no code implementations • 21 Feb 2024 • Zihao Chen, Johannes Leugering, Gert Cauwenberghs, Shantanu Chakrabartty

In this paper, we derive new theoretical lower bounds on energy dissipation when training AI systems using different LIM approaches.

Neuromorphic Computing with AER using Time-to-Event-Margin Propagation

no code implementations • 27 Apr 2023 • Madhuvanthi Srivatsav R, Shantanu Chakrabartty, Chetan Singh Thakur

Address-Event-Representation (AER) is a spike-routing protocol that allows the scaling of neuromorphic and spiking neural network (SNN) architectures to a size that is comparable to that of digital neural network architectures.
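
As a software analogy (not the paper's hardware protocol), an AER stream can be pictured as per-neuron spike trains flattened into a single time-ordered sequence of (address, timestamp) events; the names below are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AEREvent:
    """A single address-event: which neuron fired, and when (microseconds)."""
    address: int
    timestamp_us: int

def encode_spikes(spike_times_us: dict) -> list:
    """Flatten per-neuron spike trains into a time-ordered AER event stream.

    Illustrative sketch: real AER hardware arbitrates events on a shared
    bus in real time; here we simply merge and sort by timestamp.
    """
    events = [AEREvent(addr, t)
              for addr, times in spike_times_us.items()
              for t in times]
    return sorted(events, key=lambda e: e.timestamp_us)

# Neuron 3 spikes at 100 us and 400 us; neuron 7 at 250 us.
stream = encode_spikes({3: [100, 400], 7: [250]})
```

The resulting stream interleaves events from different neurons purely by time, which is what lets AER scale routing independently of the number of physical wires.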

Multiplierless In-filter Computing for tinyML Platforms

no code implementations • 24 Apr 2023 • Abhishek Ramdas Nair, Pallab Kumar Nath, Shantanu Chakrabartty, Chetan Singh Thakur

Wildlife conservation relies on continuous monitoring of environmental factors and on biomedical classification, both of which generate vast amounts of sensor data; the limited bandwidth available for remote monitoring makes this a challenge.

Classification

A Framework for Analyzing Cross-correlators using Price's Theorem and Piecewise-Linear Decomposition

no code implementations • 18 Apr 2023 • Zhili Xiao, Shantanu Chakrabartty

Precise estimation of cross-correlation or similarity between two random variables lies at the heart of signal detection, hyperdimensional computing, associative memories, and neural networks.
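
As a baseline for what such estimators compute, here is the plain sample Pearson-correlation estimate between two paired sequences (the textbook formula, not the paper's piecewise-linear method):

```python
import math

def sample_correlation(x, y):
    """Pearson correlation coefficient from paired samples (illustrative).

    Returns a value in [-1, 1]: +1 for perfectly linearly correlated
    inputs, 0 for uncorrelated, -1 for anti-correlated.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

# y is an exact linear function of x, so the estimate is 1.
r = sample_correlation([1, 2, 3, 4], [2, 4, 6, 8])
```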

On-device Synaptic Memory Consolidation using Fowler-Nordheim Quantum-tunneling

no code implementations • 27 Jun 2022 • Mustafizur Rahman, Subhankar Bose, Shantanu Chakrabartty

Synaptic memory consolidation has been heralded as one of the key mechanisms for supporting continual learning in neuromorphic Artificial Intelligence (AI) systems.

Continual Learning

Process, Bias and Temperature Scalable CMOS Analog Computing Circuits for Machine Learning

no code implementations • 11 May 2022 • Pratik Kumar, Ankita Nandi, Shantanu Chakrabartty, Chetan Singh Thakur

Analog computing is attractive compared to digital computing due to its potential for achieving higher computational density and higher energy efficiency.

BIG-bench Machine Learning

Bias-Scalable Near-Memory CMOS Analog Processor for Machine Learning

no code implementations • 10 Feb 2022 • Pratik Kumar, Ankita Nandi, Shantanu Chakrabartty, Chetan Singh Thakur

In this paper, we demonstrate the implementation of bias-scalable approximate analog computing circuits using the generalization of the margin-propagation principle called shape-based analog computing (S-AC).
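
In its standard formulation, the margin-propagation principle referenced here computes, for inputs L_i, a threshold z satisfying sum_i max(0, L_i - z) = gamma, which serves as a multiplierless, piecewise-linear approximation of log-sum-exp. A minimal numeric sketch assuming that standard formulation (bisection solver; parameter names are illustrative, not from the paper):

```python
def margin_propagation(scores, gamma, iters=60):
    """Solve sum_i max(0, L_i - z) = gamma for z by bisection.

    The left-hand side decreases monotonically in z, so bisection on
    [min(scores) - gamma, max(scores)] always brackets the root.
    """
    lo, hi = min(scores) - gamma, max(scores)
    for _ in range(iters):
        z = (lo + hi) / 2.0
        if sum(max(0.0, s - z) for s in scores) > gamma:
            lo = z  # residual too large: z must grow
        else:
            hi = z  # residual too small: z must shrink
    return (lo + hi) / 2.0

# For scores [1, 2, 3] and gamma = 1, only the top score exceeds z,
# so z = 2 satisfies max(0, 3 - z) = 1.
z_star = margin_propagation([1.0, 2.0, 3.0], gamma=1.0)
```

In hardware, the appeal is that the same relation can be enforced with only additions, comparisons, and thresholding, with no analog multipliers.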

In-filter Computing For Designing Ultra-light Acoustic Pattern Recognizers

no code implementations • 11 Sep 2021 • Abhishek Ramdas Nair, Shantanu Chakrabartty, Chetan Singh Thakur

We present a novel in-filter computing framework that can be used for designing ultra-light acoustic classifiers for use in smart Internet-of-Things (IoT) devices.

Robust classification

Using growth transform dynamical systems for spatio-temporal data sonification

no code implementations • 21 Aug 2021 • Oindrila Chatterjee, Shantanu Chakrabartty

Sonification, or encoding information in meaningful audio signatures, has several advantages in augmenting or replacing traditional visualization methods for human-in-the-loop decision-making.

Decision Making • EEG

Multiplierless MP-Kernel Machine For Energy-efficient Edge Devices

no code implementations • 3 Jun 2021 • Abhishek Ramdas Nair, Pallab Kumar Nath, Shantanu Chakrabartty, Chetan Singh Thakur

We present a novel framework for designing multiplierless kernel machines that can be used on resource-constrained platforms like intelligent edge devices.

An Adaptive Synaptic Array using Fowler-Nordheim Dynamic Analog Memory

no code implementations • 13 Apr 2021 • Darshit Mehta, Kenji Aono, Shantanu Chakrabartty

In this paper, we present a synaptic array that uses dynamical states to implement an analog memory for energy-efficient training of machine learning (ML) systems.

Multiplierless and Sparse Machine Learning based on Margin Propagation Networks

no code implementations • 5 Oct 2019 • Nazreen P. M., Shantanu Chakrabartty, Chetan Singh Thakur

At the fundamental level, neural network and machine learning operations rely extensively on matrix-vector multiplication (MVM), and hardware compilers exploit the inherent parallelism in MVM operations to achieve hardware acceleration on GPUs and FPGAs.
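
For reference, the MVM primitive in question is simply a set of independent row-wise dot products, and that independence is the parallelism hardware compilers exploit:

```python
def matvec(A, x):
    """Matrix-vector multiply y = A x.

    Each output entry is an independent dot product of one row of A with x,
    so all rows can be computed in parallel on GPUs or FPGAs.
    """
    return [sum(a * b for a, b in zip(row, x)) for row in A]

# 2x2 example: [1*5 + 2*6, 3*5 + 4*6]
y = matvec([[1, 2], [3, 4]], [5, 6])
```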

BIG-bench Machine Learning • Edge-computing

Resonant Machine Learning Based on Complex Growth Transform Dynamical Systems

no code implementations • 15 Aug 2019 • Oindrila Chatterjee, Shantanu Chakrabartty

Based on this approach, this paper introduces three novel concepts:
(a) a learning framework where the network's active-power dissipation is used as a regularizer for a learning objective function subject to a zero total reactive-power constraint;
(b) a dynamical system based on complex-domain, continuous-time growth transforms that optimizes the learning objective function and drives the network towards electrical resonance under steady-state operation; and
(c) an annealing procedure that controls the trade-off between active-power dissipation and the speed of convergence.

BIG-bench Machine Learning • Novel Concepts

A Unified Perspective of Evolutionary Game Dynamics Using Generalized Growth Transforms

no code implementations • 5 Nov 2018 • Oindrila Chatterjee, Shantanu Chakrabartty

In this paper, we show that different types of evolutionary game dynamics are, in principle, special cases of a dynamical system model based on our previously reported framework of generalized growth transforms.
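
As a point of reference for the kind of dynamics being unified, here is a minimal Euler-discretized step of the classical replicator equation, one of the best-known evolutionary game dynamics (the payoff matrix and step size are illustrative, not from the paper):

```python
def replicator_step(x, A, dt=0.01):
    """One Euler step of replicator dynamics: dx_i/dt = x_i * (f_i - fbar),
    with linear fitness f = A x and mean fitness fbar = x . f.

    Strategies whose fitness exceeds the population average grow;
    the update preserves the probability simplex (entries sum to 1).
    """
    f = [sum(aij * xj for aij, xj in zip(row, x)) for row in A]
    fbar = sum(xi * fi for xi, fi in zip(x, f))
    return [xi + dt * xi * (fi - fbar) for xi, fi in zip(x, f)]

# Illustrative 2x2 game: strategy 0 has above-average fitness at the
# uniform state, so its share increases while the total stays 1.
x1 = replicator_step([0.5, 0.5], [[0.0, 2.0], [1.0, 0.0]])
```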
