Search Results for author: Twisha Titirsha

Found 7 papers, 0 papers with code

On the Mitigation of Read Disturbances in Neuromorphic Inference Hardware

no code implementations27 Jan 2022 Ankita Paul, Shihao Song, Twisha Titirsha, Anup Das

Our analysis shows a strong dependency both on model characteristics, such as synaptic activation and criticality, and on the voltage used to read resistance states during inference.

NeuroXplorer 1.0: An Extensible Framework for Architectural Exploration with Spiking Neural Networks

no code implementations4 May 2021 Adarsha Balaji, Shihao Song, Twisha Titirsha, Anup Das, Jeffrey Krichmar, Nikil Dutt, James Shackleford, Nagarajan Kandasamy, Francky Catthoor

Recently, both industry and academia have proposed many different neuromorphic architectures to execute applications designed with Spiking Neural Networks (SNNs).

On the Role of System Software in Energy Management of Neuromorphic Computing

no code implementations22 Mar 2021 Twisha Titirsha, Shihao Song, Adarsha Balaji, Anup Das

Based on this formulation, we first evaluate the role of system software in managing the energy consumption of neuromorphic systems.


Endurance-Aware Mapping of Spiking Neural Networks to Neuromorphic Hardware

no code implementations9 Mar 2021 Twisha Titirsha, Shihao Song, Anup Das, Jeffrey Krichmar, Nikil Dutt, Nagarajan Kandasamy, Francky Catthoor

We propose eSpine, a novel technique to improve lifetime by incorporating the endurance variation within each crossbar when mapping machine learning workloads, ensuring that synapses with higher activation are always implemented on memristors with higher endurance, and vice versa.
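The core pairing idea described above can be sketched as a simple sort-and-match assignment. This is a hedged illustration of the concept only, not eSpine's actual algorithm (the paper reports using graph partitioning); the function name and toy inputs are assumptions for the example.

```python
def endurance_aware_map(activations, endurances):
    """Illustrative sketch (not eSpine itself): within one crossbar,
    assign synapses with higher activation to memristors with higher
    endurance by sorting both lists in descending order and pairing.

    activations[i] -- activation count of synapse i
    endurances[j]  -- endurance (write-cycle budget) of memristor j
    Returns a dict: synapse index -> memristor index.
    """
    # Synapse indices, most active first
    syn_order = sorted(range(len(activations)),
                       key=lambda i: activations[i], reverse=True)
    # Memristor indices, most durable first
    mem_order = sorted(range(len(endurances)),
                       key=lambda j: endurances[j], reverse=True)
    # Pair rank-for-rank: k-th most active synapse gets k-th most durable cell
    return dict(zip(syn_order, mem_order))


# Toy example: synapse 1 is the most active, memristor 1 the most durable
mapping = endurance_aware_map([5, 80, 20], [1e6, 9e6, 3e6])
# mapping == {1: 1, 2: 2, 0: 0}
```

The rank-for-rank pairing equalizes wear: the heaviest-used synapse consumes the largest write budget, so it lands on the cell that can absorb it.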


Thermal-Aware Compilation of Spiking Neural Networks to Neuromorphic Hardware

no code implementations9 Oct 2020 Twisha Titirsha, Anup Das

Such current variations create a thermal gradient within each crossbar of the hardware, depending on the machine learning workload and the mapping of neurons and synapses of the workload to these crossbars.


Reliability-Performance Trade-offs in Neuromorphic Computing

no code implementations26 Sep 2020 Twisha Titirsha, Anup Das

A major source of voltage drop in a crossbar of these architectures is the parasitic components on the crossbar's bitlines and wordlines, which are deliberately made longer to achieve lower cost-per-bit.
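The cumulative effect of parasitic wire resistance can be illustrated with a minimal IR-drop model, assuming (as a simplification not taken from the paper) a uniform per-segment wordline resistance and a fixed current drawn at each column; all names and values here are hypothetical.

```python
def voltage_along_wordline(v_drive, currents, r_wire):
    """Minimal IR-drop sketch for one crossbar wordline.

    v_drive     -- voltage applied at the near end of the wordline
    currents[k] -- current drawn by the cell at column k (amps)
    r_wire      -- parasitic resistance of each wire segment (ohms)

    Each segment carries the total current of all downstream cells,
    so the voltage seen at a column falls with distance from the driver.
    Returns the list of voltages at each column.
    """
    voltages = []
    v = v_drive
    downstream = sum(currents)  # current still flowing past the driver
    for i_k in currents:
        v -= downstream * r_wire   # drop across the segment before this column
        voltages.append(v)
        downstream -= i_k          # this cell's current leaves the line here
    return voltages


# Toy example: 1 V drive, two cells drawing 1 mA each, 1 ohm per segment
print(voltage_along_wordline(1.0, [0.001, 0.001], 1.0))
# [0.998, 0.997] -- the far cell sees the lowest voltage
```

This makes the trade-off concrete: lengthening the lines lowers cost-per-bit but increases the number of resistive segments, so cells far from the driver see a degraded voltage, which is the reliability side of the trade-off the paper studies.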

