Encoder deep interleaved network with multi-scale aggregation for RGB-D salient object detection
Abstract: Recently, RGB-D salient object detection (SOD) has attracted widespread research interest. Existing RGB-D SOD approaches mainly consider cross-modal information fusion in the decoder, and their multi-modal interaction concentrates mainly on same-level features between the RGB stream and the depth stream; they do not deeply explore the coherence of multi-modal features at different levels. In this paper, we design a two-stream deep interleaved encoder network that extracts RGB and depth information and mixes them simultaneously. This network allows us to gradually learn multi-modal representations at different levels, from shallow to deep. Moreover, to further fuse multi-modal features in the decoding stage, we propose a cross-modal mutual guidance module and a residual multi-scale aggregation module to implement global guidance and local refinement of the salient region. Extensive experiments on six benchmark datasets demonstrate that the proposed approach performs favorably against most state-of-the-art methods under different evaluation metrics. During testing, the model runs at a real-time speed of 93 FPS. (c) 2022 Elsevier Ltd. All rights reserved.
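The interleaved-encoder idea described above can be sketched as follows. This is a minimal, hypothetical illustration only: the function name, the placeholder per-stream transforms, and the weighted-sum mixing step are all assumptions for clarity, not the paper's actual operators, which are not specified in the abstract.

```python
# Hypothetical sketch of a two-stream deep interleaved encoder: at every
# encoder level, each stream's features are mixed with the other stream's
# before the next stage, so cross-modal fusion happens gradually from
# shallow to deep (rather than only in the decoder).

def encode_interleaved(rgb, depth, levels=4):
    """Run both streams level by level, exchanging features after each stage.

    `rgb` and `depth` are lists of floats standing in for feature maps.
    Returns the (rgb, depth) feature pair produced at each level.
    """
    feats = []
    for _ in range(levels):
        # Per-stream feature extraction (stand-in for a conv block).
        rgb = [x * 2.0 for x in rgb]      # placeholder RGB transform
        depth = [x + 1.0 for x in depth]  # placeholder depth transform
        # Interleaving step: each stream absorbs the other's features.
        mixed_rgb = [r + 0.5 * d for r, d in zip(rgb, depth)]
        mixed_depth = [d + 0.5 * r for r, d in zip(rgb, depth)]
        rgb, depth = mixed_rgb, mixed_depth
        feats.append((rgb, depth))
    return feats

feats = encode_interleaved([1.0, 2.0], [0.0, 1.0])
print(len(feats))  # one (rgb, depth) pair per encoder level
```

In a real network the placeholder transforms would be learned convolutional blocks and the mixing would be a learned fusion operator; the point here is only the control flow, where fusion is repeated at every encoder level instead of deferred to the decoder.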