Progressive Attention Guided Recurrent Network for Salient Object Detection

Effective convolutional features play an important role in saliency estimation, but learning powerful features for saliency remains a challenging task. FCN-based methods directly apply multi-level convolutional features without distinction, which leads to sub-optimal results due to the distraction from redundant details. In this paper, we propose a novel attention guided network which selectively integrates multi-level contextual information in a progressive manner. Attentive features generated by our network can alleviate distraction from the background and thus achieve better performance. On the other hand, it is observed that most existing algorithms conduct salient object detection by exploiting side-output features of the backbone feature extraction network. However, shallower layers of the backbone network lack the ability to obtain global semantic information, which limits effective feature learning. To address this problem, we introduce multi-path recurrent feedback to enhance our proposed progressive attention driven framework. Through multi-path recurrent connections, global semantic information from the top convolutional layer is transferred to shallower layers, which intrinsically refines the entire network. Experimental results on six benchmark datasets demonstrate that our algorithm performs favorably against the state-of-the-art approaches.
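The abstract describes two mechanisms: a progressive attention module that gates multi-level features channel-wise and then spatially, and multi-path recurrent feedback that sends top-layer semantics back to shallower layers. Below is a minimal PyTorch sketch of the channel-then-spatial attention gating only; the module name, channel sizes, and exact gating functions are illustrative assumptions and not the authors' released implementation.

```python
# Minimal sketch of channel-then-spatial attention over one side-output feature.
# All names and hyper-parameters here are assumptions for illustration.
import torch
import torch.nn as nn


class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel attention: global average pool, then a small bottleneck MLP.
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        # Spatial attention: 1x1 conv producing a single-channel gate map.
        self.spatial_conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        # Channel-wise gating: which feature maps matter for saliency.
        ca = self.channel_fc(feat.mean(dim=(2, 3))).view(b, c, 1, 1)
        feat = feat * ca
        # Spatial gating: which locations matter, suppressing background clutter.
        sa = torch.sigmoid(self.spatial_conv(feat))
        return feat * sa


if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)              # a multi-level side-output feature
    attended = ChannelSpatialAttention(64)(x)
    print(attended.shape)                        # torch.Size([2, 64, 32, 32])
```

In the full framework this kind of gate would be applied progressively at each feature level, and the multi-path recurrent connections would feed top-layer semantic features back into the shallower stages; that wiring is omitted from the sketch above.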

Results from the Paper


Ranked #12 on RGB Salient Object Detection on DUTS-TE (max F-measure metric)

Task                           Dataset   Model   Metric          Value   Rank
RGB Salient Object Detection   DUTS-TE   PAGR    MAE             0.055   #21
RGB Salient Object Detection   DUTS-TE   PAGR    max F-measure   0.854   #12
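Both reported metrics are standard in salient object detection: MAE is the mean absolute difference between the predicted saliency map and the binary ground truth, and the maximum F-measure is F_beta (with beta^2 = 0.3 by convention) maximized over binarization thresholds. The NumPy sketch below shows one common way to compute them; the 256-step threshold sweep and the array names are assumptions for illustration.

```python
# Hedged sketch of the two reported metrics: MAE and maximum F-measure.
import numpy as np


def mae(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean absolute error between a [0,1] saliency map and a binary mask."""
    return float(np.abs(pred - gt).mean())


def max_f_measure(pred: np.ndarray, gt: np.ndarray, beta2: float = 0.3) -> float:
    """Best F_beta over a sweep of binarization thresholds (beta^2 = 0.3)."""
    best, eps = 0.0, 1e-8
    gt_pos = gt > 0.5
    for t in np.linspace(0.0, 1.0, 256):
        binary = pred >= t
        tp = np.logical_and(binary, gt_pos).sum()
        precision = tp / (binary.sum() + eps)
        recall = tp / (gt_pos.sum() + eps)
        f = (1 + beta2) * precision * recall / (beta2 * precision + recall + eps)
        best = max(best, float(f))
    return best


if __name__ == "__main__":
    pred = np.random.rand(256, 256)                        # predicted saliency in [0, 1]
    gt = (np.random.rand(256, 256) > 0.5).astype(float)    # binary ground truth mask
    print(mae(pred, gt), max_f_measure(pred, gt))
```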
