Dense Recurrent Neural Networks for Scene Labeling

Recently, recurrent neural networks (RNNs) have demonstrated the ability to improve scene labeling by capturing long-range dependencies among image units. In this paper, we propose dense RNNs for scene labeling, which explore a variety of long-range semantic dependencies among image units. In contrast to existing RNN-based approaches, our dense RNNs capture richer contextual dependencies for each image unit via dense connections between every pair of image units, which significantly enhances their discriminative power. In addition, to select relevant dependencies and suppress irrelevant ones for each unit, we introduce an attention model into the dense RNNs. The attention model automatically assigns more weight to helpful dependencies and less weight to irrelevant ones. Integrated with convolutional neural networks (CNNs), our method achieves state-of-the-art performance on the PASCAL Context, MIT ADE20K and SiftFlow benchmarks.
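To make the core idea concrete, below is a minimal sketch of attention-weighted dense aggregation over image units. It is not the authors' implementation: the paper propagates information with recurrent units, whereas this sketch approximates the dense, attention-gated dependency idea with a non-local-style attention block over a CNN feature map. The module and variable names (e.g. `DenseAttentionAggregation`, `hidden`) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class DenseAttentionAggregation(nn.Module):
    """Illustrative module: every image unit (spatial position) aggregates
    features from all other units, weighted by learned attention scores,
    mimicking dense, attention-selected dependencies."""

    def __init__(self, channels, hidden=64):
        super().__init__()
        self.query = nn.Conv2d(channels, hidden, kernel_size=1)
        self.key = nn.Conv2d(channels, hidden, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        # x: (B, C, H, W) feature map, e.g. produced by a CNN backbone
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, N, hidden), N = H*W
        k = self.key(x).flatten(2)                    # (B, hidden, N)
        v = self.value(x).flatten(2).transpose(1, 2)  # (B, N, C)

        # Pairwise attention between all units: helpful dependencies
        # receive larger weights, irrelevant ones are suppressed.
        attn = torch.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)  # (B, N, N)
        context = (attn @ v).transpose(1, 2).reshape(b, c, h, w)

        # Combine each unit's own features with its attention-selected
        # context from all other units.
        return x + context


if __name__ == "__main__":
    feats = torch.randn(2, 256, 32, 32)   # toy CNN feature map
    out = DenseAttentionAggregation(256)(feats)
    print(out.shape)                      # torch.Size([2, 256, 32, 32])
```

In the paper's formulation, each unit would instead receive these dense, attention-weighted messages through recurrent state updates rather than a single attention pass; the sketch only illustrates the dense pairwise weighting.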
