
Mutual Learning and Boosting Segmentation for RGB-D Salient Object Detection

RGB-D salient object detection segments the most salient objects from a given scene by fusing an RGB image with a depth map (Depth). Inherent noise in the raw depth map can cause the model to fit erroneous information during detection. To improve detection performance, this paper proposes an RGB-D salient object detection model based on mutual learning and boosting segmentation. A depth optimization module is designed to obtain the optimal depth information between the raw depth map and a predicted depth map; a feature alignment module and a cross-modal integration module are introduced to perform cross-modal fusion; and, to address the accuracy loss caused by segmentation, a separation-and-reconstruction decoder based on a multi-source feature integration mechanism is constructed. Experiments on five public datasets show that, compared with other models, the proposed model is more accurate and the network is more stable.
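The abstract names the modules only at a high level. As a rough illustration of how a depth-optimization step and a cross-modal fusion step of this kind might be wired together, the following is a minimal PyTorch-style sketch; the class names, the learned gating between raw and predicted depth, and the concatenation-plus-convolution fusion are assumptions made for this example, not the paper's actual implementation.

import torch
import torch.nn as nn

class DepthOptimization(nn.Module):
    """Hypothetical sketch: blend raw-depth features with features from a
    depth map predicted from RGB, using a learned gate to suppress noise."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, raw_depth_feat, pred_depth_feat):
        g = self.gate(torch.cat([raw_depth_feat, pred_depth_feat], dim=1))
        # Weighted combination: lean on the predicted depth where the gate is low.
        return g * raw_depth_feat + (1.0 - g) * pred_depth_feat

class CrossModalIntegration(nn.Module):
    """Hypothetical sketch: fuse aligned RGB and depth features of one stage."""
    def __init__(self, channels: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, rgb_feat, depth_feat):
        return self.fuse(torch.cat([rgb_feat, depth_feat], dim=1))

if __name__ == "__main__":
    # Toy feature maps standing in for one encoder stage (N, C, H, W).
    rgb = torch.randn(1, 64, 56, 56)
    raw_d = torch.randn(1, 64, 56, 56)
    pred_d = torch.randn(1, 64, 56, 56)
    depth = DepthOptimization(64)(raw_d, pred_d)
    fused = CrossModalIntegration(64)(rgb, depth)
    print(fused.shape)  # torch.Size([1, 64, 56, 56])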

RGB-D salient object detection; mutual learning; feature alignment; cross-modal integration

Xia Chenxing, Wang Jingjing, Ge Bin


School of Computer Science and Engineering, Anhui University of Science and Technology (Huainan, Anhui 232001, China)


2024

Journal of Tonghua Normal University
Tonghua Normal University

Impact factor: 0.266
ISSN: 1008-7974
Year, Volume (Issue): 2024, 45(6)