1 code implementation • 30 Aug 2022 • Xiangzhong Luo, Di Liu, Hao Kong, Shuo Huai, Hui Chen, Weichen Liu
Benefiting from its search efficiency, differentiable neural architecture search (NAS) has emerged as the dominant approach for automatically designing competitive deep neural networks (DNNs).
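The core idea behind differentiable NAS can be sketched as a continuous relaxation: each edge of the network computes a softmax-weighted mixture of candidate operations, so the architecture parameters can be optimized by gradient descent alongside the weights. This is a minimal illustrative sketch (the candidate operations and values below are made up, not taken from the paper):

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Hypothetical candidate operations on one edge of the search cell.
candidate_ops = [
    lambda x: x,                    # identity / skip connection
    lambda x: np.maximum(x, 0.0),   # ReLU as a stand-in op
    lambda x: 0.5 * x,              # scaled linear op
]

def mixed_op(x, alpha):
    """Continuous relaxation: softmax-weighted sum of all candidate ops.

    Because the mixture is differentiable in alpha, the architecture
    parameters can be trained with gradients; the final architecture
    keeps the op with the largest weight.
    """
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, candidate_ops))

x = np.array([-1.0, 2.0])
alpha = np.zeros(3)       # uniform mixture before any training
y = mixed_op(x, alpha)    # each op contributes with weight 1/3
```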
no code implementations • 19 Jan 2022 • Shien Zhu, Luan H. K. Duong, Hui Chen, Di Liu, Weichen Liu
Quantization is applied to reduce the latency and storage cost of CNNs.
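Why quantization cuts storage and latency: float32 weights are mapped to low-bit integers with a scale factor, shrinking storage (4x for int8) and enabling cheaper integer arithmetic. A minimal sketch of symmetric per-tensor quantization, with illustrative values (not the paper's scheme):

```python
import numpy as np

def quantize(w, num_bits=8):
    """Symmetric uniform quantization to signed integers."""
    qmax = 2 ** (num_bits - 1) - 1           # 127 for int8
    scale = np.abs(w).max() / qmax           # one scale per tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.02, 1.0], dtype=np.float32)
q, s = quantize(w)          # int8 codes plus one float scale
w_hat = dequantize(q, s)    # reconstruction error bounded by s / 2
```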
no code implementations • 11 Mar 2021 • Xiangzhong Luo, Di Liu, Shuo Huai, Weichen Liu
In this paper, we present a novel multi-objective hardware-aware neural architecture search (NAS) framework, namely HSCoNAS, to automate the design of deep neural networks (DNNs) with high accuracy but low latency upon target hardware.
Hardware Aware Neural Architecture Search • Neural Architecture Search
no code implementations • 25 Nov 2020 • Di Liu, Hao Kong, Xiangzhong Luo, Weichen Liu, Ravi Subramaniam
To bridge the gap, a plethora of deep learning techniques and optimization methods have been proposed in recent years: lightweight deep learning models, network compression, and efficient neural architecture search.
no code implementations • 18 May 2020 • Fuyuan Lyu, Shien Zhu, Weichen Liu
However, these filter-wise quantization methods face a natural upper limit imposed by the kernel size.
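To see where the filter-wise granularity limit comes from: each convolutional filter gets its own scale factor, so the quantization grid can adapt per filter but no finer, and the reconstruction error of a filter is bounded by half its scale. A hedged sketch with made-up shapes and values (not the authors' method):

```python
import numpy as np

def quantize_per_filter(W, num_bits=8):
    """Filter-wise symmetric quantization of a conv weight tensor.

    W has shape (out_channels, in_channels, k, k); each output filter
    receives its own scale, the finest granularity this scheme allows.
    """
    qmax = 2 ** (num_bits - 1) - 1
    # One scale per output filter, from that filter's max magnitude.
    scales = np.abs(W).reshape(W.shape[0], -1).max(axis=1) / qmax
    q = np.round(W / scales[:, None, None, None]).astype(np.int8)
    return q, scales

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3, 3, 3)).astype(np.float32)
q, scales = quantize_per_filter(W)
W_hat = q.astype(np.float32) * scales[:, None, None, None]
# Rounding error per element is at most half that filter's scale.
max_err = np.abs(W - W_hat).max()
```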