
Rethinking the Number of Channels for the Convolutional Neural Network

Recent algorithms for automatic neural architecture search perform remarkably well, but few of them can effectively design the number of channels for convolutional neural networks while keeping the computational cost low. In this paper, we propose a method for efficient automatic architecture search that targets the widths of networks rather than the connections of the neural architecture. Our method, a functionally incremental search based on function-preserving transformations, explores the number of channels rapidly while controlling the parameter count of the target network. On CIFAR-10 and CIFAR-100 classification, our method uses minimal computational resources (0.4~1.3 GPU-days) to discover more efficient width rules, improving accuracy by about 0.5% on CIFAR-10 and about 2.33% on CIFAR-100 with fewer parameters. In particular, our method can rapidly explore the number of channels of almost any convolutional neural network.
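The abstract does not spell out the widening operator itself. As one concrete illustration, the sketch below implements the standard function-preserving channel widening (in the style of Net2WiderNet) that a functionally incremental width search could apply repeatedly: new output channels replicate existing ones, and the next layer's incoming weights are rescaled so the composed mapping is unchanged. The function name `widen_conv` and the NumPy-only setup are assumptions for illustration, not the paper's code.

```python
import numpy as np

def widen_conv(w1, w2, new_width, rng=None):
    """Function-preserving widening of one conv layer (illustrative sketch).

    w1: weights of the layer being widened, shape (out_c, in_c, k, k)
    w2: weights of the next layer,          shape (out2, out_c, k, k)
    new_width: target number of output channels for w1 (>= out_c)

    Returns (w1', w2') such that the two-layer computation is unchanged:
    added channels copy existing ones, and the next layer divides each
    incoming weight by the number of copies of that channel.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    out_c = w1.shape[0]
    # Map each output channel of the widened layer to an original channel.
    mapping = np.concatenate([np.arange(out_c),
                              rng.integers(0, out_c, new_width - out_c)])
    counts = np.bincount(mapping, minlength=out_c)   # replication counts
    w1_new = w1[mapping]                             # copy channel filters
    # Rescale the next layer so replicated activations sum to the original.
    w2_new = w2[:, mapping] / counts[mapping][None, :, None, None]
    return w1_new, w2_new

if __name__ == "__main__":
    # Quick check with 1x1 convs (which reduce to matrix products) and ReLU:
    # the widened pair must compute exactly the same function.
    rng = np.random.default_rng(1)
    w1 = rng.standard_normal((4, 3, 1, 1))
    w2 = rng.standard_normal((5, 4, 1, 1))
    w1n, w2n = widen_conv(w1, w2, 7, rng)
    x = rng.standard_normal(3)
    h  = np.maximum(w1[..., 0, 0] @ x, 0)
    hn = np.maximum(w1n[..., 0, 0] @ x, 0)
    assert np.allclose(w2[..., 0, 0] @ h, w2n[..., 0, 0] @ hn)
```

Because the widened network computes the same function as before, the search can grow widths incrementally and continue training without losing accuracy, which is what makes this style of width exploration cheap.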
