Salient Object Detection (SOD) aims to recognize and segment visually salient objects in images, and is an important research topic in computer vision and related fields. Existing SOD methods based on Fully Convolutional Networks (FCNs) have achieved good performance. However, the types and sizes of salient objects vary widely in real-world scenes, so detecting and segmenting salient objects accurately and completely remains a major challenge. To address this, this paper proposes a novel SOD method that integrates multiple contexts and hybrid interaction, predicting salient objects efficiently through the collaboration of a Dense Context Information Exploration (DCIE) module and a Multi-source Feature Hybrid Interaction (MFHI) module. The DCIE module uses dilated convolution, asymmetric convolution, and dense guided connections to progressively capture strongly correlated multi-scale, multi-receptive-field context information, and enhances the expressiveness of each initial input feature by aggregating this context. The MFHI module contains diverse feature aggregation operations that adaptively exchange complementary information across multi-level features, generating high-quality feature representations for accurate saliency prediction. The performance of the proposed method is tested on five public datasets. Experimental results demonstrate that our method achieves superior prediction performance compared with 19 state-of-the-art SOD methods under different evaluation metrics.
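The multi-receptive-field behavior of the DCIE module comes from dilated convolutions, whose effective kernel size grows with the dilation rate. The abstract does not state the actual dilation rates or kernel sizes used; the sketch below uses hypothetical 3x3 kernels with dilation rates (1, 2, 4, 8) purely to illustrate the receptive-field arithmetic behind capturing multi-scale context.

```python
def effective_kernel(k, d):
    # Effective kernel size of a dilated convolution:
    # a k x k kernel with dilation d spans k + (k - 1) * (d - 1) pixels.
    return k + (k - 1) * (d - 1)

def stacked_receptive_field(layers):
    # Receptive field of stride-1 convolutions applied in sequence;
    # `layers` is a list of (kernel_size, dilation) pairs.
    rf = 1
    for k, d in layers:
        rf += effective_kernel(k, d) - 1
    return rf

# Hypothetical rates (not from the paper): four 3x3 branches.
print([effective_kernel(3, d) for d in (1, 2, 4, 8)])  # [3, 5, 9, 17]
# Densely chaining the same branches widens the overall context window:
print(stacked_receptive_field([(3, 1), (3, 2), (3, 4), (3, 8)]))  # 31
```

This arithmetic explains why combining several dilation rates at equal parameter cost yields the "multi-receptive field" context the DCIE module aggregates.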
Key words
Computer vision / Salient Object Detection (SOD) / Fully Convolutional Networks (FCNs) / Context information