no code implementations • 12 Mar 2024 • Geonhwa Jeong, Po-An Tsai, Abhimanyu R. Bambhaniya, Stephen W. Keckler, Tushar Krishna
Next, we develop a software framework, TASDER, to accelerate DNNs by searching for layer-wise, high-quality structured decompositions of both weight and activation tensors, so that they can be accelerated on any system with structured sparse hardware support.
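To make the per-layer search concrete, here is a minimal sketch of one way such a search could work, assuming N:M structured sparsity as the target pattern; the candidate list, the L2 error metric, and the function names are illustrative assumptions, not TASDER's actual API.

```python
# Hypothetical sketch: for each layer, try candidate N:M structured-sparsity
# patterns and keep the one that best preserves the tensor (assumed metric).
import numpy as np

def nm_prune(tensor: np.ndarray, n: int, m: int) -> np.ndarray:
    """Keep the n largest-magnitude values in every group of m (N:M sparsity)."""
    flat = tensor.reshape(-1, m)
    # Indices of the (m - n) smallest-magnitude entries in each group.
    drop = np.argsort(np.abs(flat), axis=1)[:, : m - n]
    pruned = flat.copy()
    np.put_along_axis(pruned, drop, 0.0, axis=1)
    return pruned.reshape(tensor.shape)

def search_layer_pattern(weight: np.ndarray, candidates=((1, 4), (2, 4), (4, 8))):
    """Pick the candidate N:M pattern with the smallest L2 reconstruction error."""
    return min(candidates,
               key=lambda nm: np.linalg.norm(weight - nm_prune(weight, *nm)))

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
print(search_layer_pattern(w))  # e.g. (4, 8): the most flexible candidate wins
```

In this toy form the search is a brute-force sweep per layer; the point is only that each layer can end up with a different structured pattern, which is the layer-wise aspect the abstract describes.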
no code implementations • 22 May 2023 • Yannan Nellie Wu, Po-An Tsai, Saurav Muralidharan, Angshuman Parashar, Vivienne Sze, Joel S. Emer
Due to complex interactions among various deep neural network (DNN) optimization techniques, modern DNNs can have weights and activations that are dense or sparse with diverse sparsity degrees.
1 code implementation • 7 Oct 2022 • Sheng-Chun Kao, Angshuman Parashar, Po-An Tsai, Tushar Krishna
Map Space Exploration is the problem of finding optimized mappings of a Deep Neural Network (DNN) model on an accelerator.
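A toy illustration of what a map space looks like, under the assumption that the workload is a matmul and only tiling is varied (real map spaces also cover loop order and spatial parallelism); the buffer size and the DRAM-traffic cost model are invented for this sketch.

```python
# Enumerate tile sizes for a matmul's M/N/K loops and keep the mapping that
# minimizes a simple cost model counting off-chip traffic.
import itertools

M, N, K = 256, 256, 256          # matmul dimensions
BUFFER = 16 * 1024               # on-chip buffer capacity, in elements (assumed)

def divisors(x):
    return [d for d in range(1, x + 1) if x % d == 0]

def cost(tm, tn, tk):
    """Rough traffic model: each operand is re-fetched once per outer
    iteration of the loop it does not depend on."""
    if tm * tk + tk * tn + tm * tn > BUFFER:
        return float("inf")      # mapping does not fit on chip
    a = M * K * (N // tn)        # A re-fetched across the N loop
    b = K * N * (M // tm)        # B re-fetched across the M loop
    c = M * N * (K // tk)        # C re-fetched across the K loop
    return a + b + c

best = min(itertools.product(divisors(M), divisors(N), divisors(K)),
           key=lambda t: cost(*t))
print("best tiling (tm, tn, tk):", best, "traffic:", cost(*best))
```

Even this stripped-down space has hundreds of points, which hints at why exhaustive search stops scaling once loop order and spatial mapping are added.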
no code implementations • 12 May 2022 • Yannan Nellie Wu, Po-An Tsai, Angshuman Parashar, Vivienne Sze, Joel S. Emer
This paper first presents a unified taxonomy to systematically describe the diverse sparse tensor accelerator design space.
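One way to picture a design-space taxonomy is as a set of orthogonal axes, so that distinct accelerator designs become points in a single structured space. The axis and option names below are examples in the spirit of the abstract, not the paper's exact vocabulary.

```python
# Illustrative (assumed) encoding of a sparse-accelerator taxonomy.
from dataclasses import dataclass
from enum import Enum

class Format(Enum):
    UNCOMPRESSED = "dense array"
    BITMASK = "bitmask"
    CSR = "compressed sparse row"

class ZeroHandling(Enum):
    NONE = "compute on zeros"
    GATE = "gate: hold the unit idle on zeros"
    SKIP = "skip: advance past zeros entirely"

@dataclass(frozen=True)
class SparseDesign:
    weight_format: Format
    activation_format: Format
    zero_handling: ZeroHandling

# Two different hypothetical designs expressed as points in the taxonomy:
design_a = SparseDesign(Format.CSR, Format.UNCOMPRESSED, ZeroHandling.SKIP)
design_b = SparseDesign(Format.BITMASK, Format.BITMASK, ZeroHandling.GATE)
print(design_a, design_b, sep="\n")
```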
no code implementations • 15 Sep 2021 • Geonhwa Jeong, Gokcen Kestor, Prasanth Chatarasi, Angshuman Parashar, Po-An Tsai, Sivasankaran Rajamanickam, Roberto Gioiosa, Tushar Krishna
The algorithms and accelerator cost models are connected via a novel mapping abstraction that captures the map space of spatial accelerators; this space can be systematically pruned based on constraints from the hardware, workload, and mapper.
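A hedged sketch of what constraint-based pruning might look like: candidate mappings are generated first, then filtered by hardware and workload predicates before any cost model runs. The PE count, scratchpad size, and predicate definitions here are invented for illustration.

```python
# Prune a toy map space with hardware and workload constraints.
import itertools

PE_COUNT = 64                    # spatial processing elements (assumed)
SCRATCHPAD = 4096                # per-PE buffer in elements (assumed)

def candidate_mappings(dim=128):
    """Enumerate (spatial_factor, tile_size) pairs for one loop dimension."""
    factors = [d for d in range(1, dim + 1) if dim % d == 0]
    return itertools.product(factors, factors)

def satisfies_hardware(spatial, tile):
    return spatial <= PE_COUNT and tile * tile <= SCRATCHPAD

def satisfies_workload(spatial, tile, dim=128):
    return spatial * tile <= dim   # mapping must not exceed the loop bound

pruned = [m for m in candidate_mappings()
          if satisfies_hardware(*m) and satisfies_workload(*m)]
print(f"{len(pruned)} mappings survive pruning")  # only these reach the cost model
```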
1 code implementation • 2 Mar 2021 • Kartik Hegde, Po-An Tsai, Sitao Huang, Vikas Chandra, Angshuman Parashar, Christopher W. Fletcher
The key idea is to derive a smooth, differentiable approximation to the otherwise non-smooth, non-convex search space.
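A minimal sketch of the relaxation idea, under the assumption that the discrete choice is over tile sizes: replace the hard choice with a softmax-weighted mixture, so the expected cost becomes smooth in the logits and can be descended with plain gradient steps. The toy quadratic cost is an assumption, not the paper's objective.

```python
# Softmax relaxation of a discrete tile-size choice (illustrative only).
import numpy as np

tiles = np.array([8, 16, 32, 64], dtype=float)     # discrete candidates
cost = (tiles - 20) ** 2                           # toy cost per candidate

logits = np.zeros_like(tiles)                      # relaxed, continuous params
lr = 0.05
for _ in range(500):
    p = np.exp(logits - logits.max())
    p /= p.sum()                                   # softmax over candidates
    expected = p @ cost                            # smooth surrogate objective
    grad = p * (cost - expected)                   # d(expected)/d(logits)
    logits -= lr * grad

print("chosen tile:", tiles[np.argmax(logits)])    # converges to 16, the cheapest
```

The gradient here is exact for the softmax-mixture objective, which is the whole point of the relaxation: the surrogate is differentiable everywhere even though the underlying search space is discrete.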