
A Semantic Segmentation Model for Substation Scenes Based on Monocular Depth Estimation

In response to the problem that existing semantic segmentation methods fail to learn three-dimensional depth geometry effectively, which leads to low semantic segmentation accuracy for objects in complex substation scenes, this article proposes a semantic segmentation model for substation scenes based on monocular depth estimation. The model consists of two parts: a DeepLab v3+ image semantic segmentation model and an AdaBins monocular depth estimation module. First, the AdaBins module generates a depth map from the visible-light image, extracting the depth information of the target objects in three-dimensional space. Second, the depth values in the depth map are used as weights and fused with the visible-light image by matrix multiplication, and distant, invalid background pixels are weakened according to a preset depth threshold to reduce their impact on the segmentation accuracy of the target objects in the subsequent segmentation stage. Finally, the fused image is fed into the DeepLab v3+ model for semantic segmentation. Experiments show that, compared with the baseline model, the proposed method extracts the depth contour features of the segmented targets more effectively and significantly improves semantic segmentation accuracy.
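The abstract describes the depth-weighted fusion only at a high level, so the following Python sketch shows one plausible reading of that step: the estimated depth map is converted into per-pixel weights, pixels beyond a depth threshold are suppressed as distant background, and the weight map is multiplied element-wise with the visible-light image before it is passed to the segmentation network. The function name depth_weighted_fusion, the max_depth value, and the inverse normalization are illustrative assumptions rather than the authors' exact formulation.

import numpy as np

def depth_weighted_fusion(rgb, depth, max_depth=10.0):
    """Fuse a visible-light image with its estimated depth map (illustrative sketch).

    rgb:       (H, W, 3) float array in [0, 1], the visible-light image.
    depth:     (H, W) float array of per-pixel depths, e.g. from AdaBins.
    max_depth: assumed threshold beyond which pixels count as distant,
               invalid background and are suppressed (value is illustrative).
    """
    # Mark pixels farther than the threshold as invalid background.
    valid = (depth <= max_depth).astype(np.float32)

    # Map depth to a weight in [0, 1]: nearer pixels receive larger weights.
    # This inverse normalization is one plausible choice, not the paper's exact formula.
    weight = (1.0 - np.clip(depth, 0.0, max_depth) / max_depth) * valid

    # Element-wise ("matrix") multiplication of the weight map with each RGB channel.
    return (rgb * weight[..., None]).astype(np.float32)

# Example with random stand-ins for a 480x640 frame and its estimated depth map.
rgb = np.random.rand(480, 640, 3).astype(np.float32)
depth = np.random.uniform(0.5, 20.0, size=(480, 640)).astype(np.float32)
fused = depth_weighted_fusion(rgb, depth, max_depth=10.0)
# `fused` would then be fed to the DeepLab v3+ segmentation network.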

monocular depth estimation; semantic segmentation; image fusion; depth map; transformer substation

Zhang Na (张娜), Wang Dawei (王大伟)


Electric Power Research Institute of State Grid Shanxi Electric Power Company, Taiyuan, Shanxi 030002, China


2024

电力系统装备
《机电商报》社


Impact factor: 0.008
ISSN: 1671-8992
Year, Volume (Issue): 2024, (5)