Search Results for author: Amir Ardakani

Found 3 papers, 0 papers with code

Standard Deviation-Based Quantization for Deep Neural Networks

no code implementations · 24 Feb 2022 · Amir Ardakani, Arash Ardakani, Brett Meyer, James J. Clark, Warren J. Gross

Quantization of deep neural networks is a promising approach that reduces the inference cost, making it feasible to run deep networks on resource-restricted devices.

Quantization
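The title suggests that quantization parameters are derived from the weight statistics. As a hedged illustration only (the function name, the 3-sigma clipping choice, and the uniform-grid scheme are assumptions, not the paper's actual method), a standard-deviation-based uniform quantizer might look like:

```python
import numpy as np

def std_quantize(w, bits=4, k=3.0):
    """Illustrative sketch: clip weights to k standard deviations
    around the mean, then round onto a uniform grid of 2**bits levels.
    Motivated by the paper's title, not taken from the paper itself."""
    mu, sigma = w.mean(), w.std()
    lo, hi = mu - k * sigma, mu + k * sigma
    levels = 2 ** bits - 1          # number of quantization steps
    step = (hi - lo) / levels
    q = np.clip(w, lo, hi)          # saturate outliers
    return np.round((q - lo) / step) * step + lo

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=1024)
wq = std_quantize(w)                # at most 16 distinct values for bits=4
```

Using the weight distribution's standard deviation to set the clipping range keeps the uniform grid dense where most weights lie, instead of wasting levels on rare outliers.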

Training Linear Finite-State Machines

no code implementations · NeurIPS 2020 · Arash Ardakani, Amir Ardakani, Warren Gross

Our FSM-based model can therefore learn extremely long-term dependencies, as it requires 1/l of the memory storage during training compared to LSTMs, where l is the number of time steps.

Language Modelling · Time Series Analysis
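The 1/l memory claim can be made concrete with a back-of-the-envelope comparison (the function names and the 1000-step, 256-unit example are illustrative assumptions): backpropagation through time stores one hidden state per time step, whereas a model whose gradient depends only on the current state keeps a single state.

```python
def bptt_activation_memory(l, hidden, bytes_per=4):
    """Standard BPTT (e.g. for an LSTM) stores every time step's
    hidden state for the backward pass: memory grows linearly in l."""
    return l * hidden * bytes_per

def constant_state_memory(hidden, bytes_per=4):
    """A model whose training gradient can be computed from the
    current state alone (as claimed for the linear FSM) keeps one
    step's state regardless of sequence length."""
    return hidden * bytes_per

l, hidden = 1000, 256
ratio = bptt_activation_memory(l, hidden) // constant_state_memory(hidden)
# ratio == l, i.e. the FSM-based model needs 1/l of the storage
```

This is exactly the 1/l factor quoted in the abstract: the saving scales with sequence length, which is why the approach targets very long-term dependencies.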

The Synthesis of XNOR Recurrent Neural Networks with Stochastic Logic

no code implementations · NeurIPS 2019 · Arash Ardakani, Zhengyun Ji, Amir Ardakani, Warren Gross

XNOR networks seek to reduce the model size and computational cost of neural networks for deployment on specialized hardware that must deliver real-time processing with limited hardware resources.

Quantization
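The cost saving in XNOR networks comes from replacing multiply-accumulate with bitwise XNOR plus a population count on {-1, +1} values. A minimal sketch of that standard trick (the helper names `pack` and `binary_dot` are my own, and this illustrates the general XNOR technique rather than the paper's stochastic-logic synthesis):

```python
def pack(v):
    """Pack a {-1, +1} vector into an integer bitmask: +1 -> bit set."""
    bits = 0
    for i, x in enumerate(v):
        if x > 0:
            bits |= 1 << i
    return bits

def binary_dot(a_bits, b_bits, n):
    """Dot product of two packed {-1, +1} vectors of length n.
    XNOR marks positions where the signs agree; with m agreements,
    the dot product is m - (n - m) = 2*m - n."""
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ b_bits) & mask
    matches = bin(xnor).count("1")   # population count
    return 2 * matches - n

a, b = [1, -1, 1, 1], [1, 1, -1, 1]
assert binary_dot(pack(a), pack(b), 4) == sum(x * y for x, y in zip(a, b))
```

On hardware, XNOR and popcount cost a fraction of a full-precision multiplier, which is why binarized layers map well onto resource-limited devices.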
