AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling

CVPR 2021 · Dilin Wang, Meng Li, Chengyue Gong, Vikas Chandra

Neural architecture search (NAS) has shown great promise in designing state-of-the-art (SOTA) models that are both accurate and efficient. Recently, two-stage NAS, e.g., BigNAS, decouples the model training and search processes and achieves remarkable search efficiency and accuracy. Two-stage NAS requires sampling from the search space during training, which directly impacts the accuracy of the final searched models. While uniform sampling has been widely used for its simplicity, it is agnostic to the model performance Pareto front, which is the main focus of the search process, and thus misses opportunities to further improve the model accuracy. In this work, we propose AttentiveNAS, which focuses on improving the sampling strategy to achieve a better performance Pareto front. We also propose algorithms to efficiently and effectively identify the networks on the Pareto front during training. Without extra re-training or post-processing, we can simultaneously obtain a large number of networks across a wide range of FLOPs. Our discovered model family, the AttentiveNAS models, achieves top-1 accuracies from 77.3% to 80.7% on ImageNet and outperforms SOTA models, including BigNAS and Once-for-All networks. We also achieve 80.1% ImageNet top-1 accuracy with only 491 MFLOPs. Our training code and pretrained models are available at https://github.com/facebookresearch/AttentiveNAS.
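The sampling idea lends itself to a compact illustration. Below is a minimal sketch of Pareto-aware ("attentive") sampling for supernet training: each step draws several uniform candidates and keeps the one with the best (or worst) predicted accuracy, steering training effort toward (or up to) the performance Pareto front. All names here (`sample_subnet`, `attentive_sample`, the toy search space, and the toy accuracy estimator) are hypothetical placeholders, not the authors' implementation; see the linked repository for the actual method.

```python
import random

# Minimal sketch of attentive (Pareto-aware) sampling for supernet training.
# All names below are illustrative placeholders, not the paper's code.

def sample_subnet(search_space):
    """Uniformly draw one architecture configuration from the search space."""
    return {knob: random.choice(options) for knob, options in search_space.items()}

def attentive_sample(search_space, estimate_accuracy, k=50, mode="best"):
    """Draw k uniform candidates, then keep the best- or worst-predicted one.

    mode="best" concentrates training on subnets near the accuracy Pareto
    front; mode="worst" instead pulls up the currently lagging subnets.
    """
    candidates = [sample_subnet(search_space) for _ in range(k)]
    candidates.sort(key=estimate_accuracy)
    return candidates[-1] if mode == "best" else candidates[0]

# Toy usage: a tiny search space and a made-up accuracy proxy.
space = {
    "depth": [2, 3, 4],
    "width_mult": [0.5, 0.75, 1.0],
    "resolution": [192, 224, 256],
}
toy_estimator = lambda cfg: cfg["depth"] * cfg["width_mult"] * cfg["resolution"]
print(attentive_sample(space, toy_estimator, k=10, mode="best"))
```

The key design point, per the abstract, is that the candidate selection is performance-aware rather than uniform, so the subnets that matter for the final Pareto front receive more training.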


Datasets

ImageNet

Results

Task: Neural Architecture Search on ImageNet. Global rank on the corresponding leaderboard is shown in parentheses.

| Model           | MACs         | Top-1 Accuracy (%) | Top-1 Error Rate (%) |
|-----------------|--------------|--------------------|----------------------|
| AttentiveNAS-A0 | 203M (#71)   | 77.3 (#54)         | 22.7 (#66)           |
| AttentiveNAS-A1 | 279M (#86)   | 78.4 (#36)         | 21.6 (#47)           |
| AttentiveNAS-A2 | 317M (#92)   | 78.8 (#32)         | 21.2 (#41)           |
| AttentiveNAS-A3 | 357M (#105)  | 79.1 (#27)         | 20.9 (#35)           |
| AttentiveNAS-A4 | 444M (#114)  | 79.8 (#20)         | 20.2 (#25)           |
| AttentiveNAS-A5 | 491M (#118)  | 80.1 (#16)         | 19.9 (#21)           |
