Differentiable Channel Sparsity Search via Weight Sharing within Filters

28 Oct 2020  ·  Yu Zhao, Chung-Kuei Lee

In this paper, we propose differentiable channel sparsity search (DCSS) for convolutional neural networks. Unlike traditional channel pruning algorithms, which require users to manually set a pruning ratio for each convolutional layer, DCSS automatically searches for the optimal combination of sparsities. Inspired by differentiable architecture search (DARTS), we adopt continuous relaxation and leverage gradient information to balance computational cost against task metrics. Since directly applying the DARTS scheme causes shape mismatching and excessive memory consumption, we introduce a novel technique called weight sharing within filters, which eliminates the shape-mismatching problem with negligible additional resources. We conduct comprehensive experiments not only on image classification but also on fine-grained tasks, including semantic segmentation and image super-resolution, to verify the effectiveness of DCSS. Compared with previous network pruning approaches, DCSS achieves state-of-the-art results on image classification. Experimental results on semantic segmentation and image super-resolution indicate that task-specific search achieves better performance than transferring slim models, demonstrating the wide applicability and high efficiency of DCSS.
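To make the idea concrete, below is a minimal sketch (in PyTorch, not the authors' code) of how a continuous relaxation over channel sparsities can reuse a single weight tensor. The `SparsitySearchConv` class, the candidate ratios, and the zero-masking of pruned channels are illustrative assumptions about what weight sharing within filters could look like; the paper's exact formulation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparsitySearchConv(nn.Module):
    """Hypothetical sketch: candidate channel sparsities share one weight
    tensor; pruned channels are zero-masked so every candidate output has
    the same shape, allowing a DARTS-style softmax mixture."""

    def __init__(self, in_ch, out_ch, k=3, ratios=(0.25, 0.5, 0.75, 1.0)):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.ratios = ratios
        # One architecture parameter per sparsity candidate (assumed init).
        self.alpha = nn.Parameter(torch.zeros(len(ratios)))

    def forward(self, x):
        y = self.conv(x)                      # weights shared by all candidates
        probs = F.softmax(self.alpha, dim=0)  # continuous relaxation
        out = torch.zeros_like(y)
        for p, r in zip(probs, self.ratios):
            keep = max(1, int(r * y.size(1)))
            mask = torch.zeros(1, y.size(1), 1, 1, device=y.device)
            mask[:, :keep] = 1.0              # keep the first `keep` channels
            out = out + p * (y * mask)        # identical shapes: no mismatch
        return out
```

Under these assumptions, one plausible readout after the search converges is to keep, for each layer, the candidate with the largest architecture weight and physically shrink the layer to that channel count; gradients through `alpha` let the search trade accuracy against cost.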
