Salient Object Detection by Fusing Local and Global Contexts

IEEE Transactions on Multimedia

By: Qinghua Ren; Shijian Lu; Jinxia Zhang; Renjie Hu

Benefiting from the powerful discriminative feature learning capability of convolutional neural networks (CNNs), deep learning techniques have achieved remarkable performance improvements in salient object detection (SOD) in recent years. However, most existing deep SOD models do not fully exploit informative contextual features, which often leads to suboptimal detection performance in the presence of a cluttered background. This paper presents a context-aware attention module that detects salient objects by simultaneously constructing connections between each image pixel and its local and global contextual pixels. Specifically, each pixel and its neighbors bidirectionally exchange semantic information by computing their correlation coefficients, and this process aggregates contextual attention features both locally and globally. In addition, an attention-guided hierarchical network architecture is designed to capture fine-grained spatial details by transmitting contextual information from deeper to shallower network layers in a top-down manner. Extensive experiments on six public SOD datasets show that the proposed model outperforms most current state-of-the-art models under different evaluation metrics.
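
The abstract does not give implementation details, but the core idea it describes, letting each pixel exchange information with both a local neighborhood and all pixels globally through correlation-based attention, can be illustrated with a minimal PyTorch-style sketch. The module below is a hypothetical illustration written for this summary, not the authors' released code: the `ContextAttention` name, the 7x7 local window, and the 1x1 convolution projections are all assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContextAttention(nn.Module):
    """Sketch of a context-aware attention block: each pixel attends to a
    local k x k neighborhood and, separately, to all pixels globally
    (non-local style); the two contextual features are then fused."""

    def __init__(self, channels: int, local_kernel: int = 7):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.local_kernel = local_kernel
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x)                                  # (B, C, H, W)
        k = self.key(x)
        v = self.value(x)

        # Global context: every pixel attends to every other pixel.
        # Note the (HW x HW) attention map is quadratic in image size, so in
        # practice a block like this is applied on downsampled feature maps.
        q_flat = q.flatten(2).transpose(1, 2)              # (B, HW, C)
        k_flat = k.flatten(2)                              # (B, C, HW)
        attn = torch.softmax(q_flat @ k_flat / c ** 0.5, dim=-1)
        v_flat = v.flatten(2).transpose(1, 2)              # (B, HW, C)
        global_ctx = (attn @ v_flat).transpose(1, 2).reshape(b, c, h, w)

        # Local context: each pixel attends to its k x k neighborhood,
        # gathered with unfold so correlations stay spatially restricted.
        pad = self.local_kernel // 2
        k_patches = F.unfold(k, self.local_kernel, padding=pad)
        k_patches = k_patches.view(b, c, self.local_kernel ** 2, h * w)
        v_patches = F.unfold(v, self.local_kernel, padding=pad)
        v_patches = v_patches.view(b, c, self.local_kernel ** 2, h * w)
        q_center = q.flatten(2).unsqueeze(2)               # (B, C, 1, HW)
        local_attn = torch.softmax(
            (q_center * k_patches).sum(1, keepdim=True) / c ** 0.5, dim=2
        )                                                  # (B, 1, k*k, HW)
        local_ctx = (local_attn * v_patches).sum(2).reshape(b, c, h, w)

        # Fuse local and global contextual features with a residual connection.
        return x + self.fuse(torch.cat([local_ctx, global_ctx], dim=1))
```

In the hierarchical architecture the abstract describes, a block like this would presumably be applied at several encoder stages, with its output upsampled and passed top-down to guide shallower layers; the exact fusion used by the authors is not specified here.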
