Mutual Learning and Boosting Segmentation for RGB-D Salient Object Detection
RGB-D salient object detection segments the most salient objects in a given scene by fusing RGB images and depth maps. Inherent noise in the original depth map can cause the model to fit incorrect information during detection. To improve detection performance, this paper proposes an RGB-D salient object detection model based on mutual learning and boosted segmentation. A depth optimization module is designed to obtain the optimal depth information from the original depth map and a predicted depth map. A semantic alignment module and a cross-modal integration module are introduced to perform cross-modal fusion. To address the accuracy loss caused by segmentation, a separation-and-reconstruction decoder based on a multi-source feature integration mechanism is constructed. Experiments on five public datasets show that the proposed model is more accurate and the network more stable than competing models.
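To make the cross-modal fusion idea concrete, the following is a minimal illustrative sketch, not the paper's actual module: it assumes same-shaped RGB and depth feature maps and fuses them with a depth-derived gate plus a residual RGB path. The function name `fuse_rgbd` and the gating scheme are hypothetical, chosen only to show one common way such an integration step can be structured.

```python
# Hypothetical sketch (not the paper's module): fuse RGB and depth
# feature maps via depth-guided gating plus a residual RGB path.
import numpy as np

def fuse_rgbd(rgb_feat: np.ndarray, depth_feat: np.ndarray) -> np.ndarray:
    """Fuse same-shaped RGB and depth feature maps of shape (C, H, W)."""
    # Squash depth features into a (0, 1) attention gate.
    gate = 1.0 / (1.0 + np.exp(-depth_feat))
    # Depth-gated RGB features, with a residual RGB connection.
    return rgb_feat * gate + rgb_feat

rgb = np.random.rand(8, 16, 16)
depth = np.random.rand(8, 16, 16)
fused = fuse_rgbd(rgb, depth)
print(fused.shape)  # (8, 16, 16)
```

The residual connection keeps the RGB stream intact even where the depth gate is small, which is one standard way to limit the influence of noisy depth regions.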