

Augmented Edge Graph Convolutional Networks for Semantic Segmentation of 3D Point Clouds

Most current point cloud semantic segmentation methods based on graph convolution overlook the importance of edge construction and therefore cannot fully represent the features of local regions. To address this limitation, we propose AE-GCN, an edge-augmented graph convolutional network that incorporates an attention mechanism. First, we add the neighboring point's features to the edge, rather than using only the feature difference between the central point and its neighbors. Second, an attention mechanism ensures that local information in the point cloud is exploited more fully. Finally, we adopt a U-shaped segmentation structure so that the network is better suited to the task of point cloud semantic segmentation. Experiments on two public datasets, Toronto_3D and S3DIS, show that AE-GCN achieves competitive results compared with most current methods: a mean intersection over union (mIoU) of 80.3% and an overall accuracy of 97.1% on Toronto_3D, and an mIoU of 68.0% with an overall accuracy of 87.2% on S3DIS.
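The abstract gives no implementation details, so as a rough illustration only: the "augmented edge" idea — concatenating the neighbor feature x_j alongside the usual EdgeConv-style pair [x_i, x_j − x_i] — and an attention-weighted neighborhood pooling can be sketched in NumPy as below. Function names and the scalar attention score are our own assumptions, not the paper's method.

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbors of each point (self excluded)."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)          # exclude the point itself
    return np.argsort(d2, axis=1)[:, :k]  # (N, k)

def augmented_edge_features(feats, idx):
    """Edge features [x_i, x_j - x_i, x_j] instead of only [x_i, x_j - x_i].

    Appending the raw neighbor feature x_j is the 'augmented edge' idea
    described in the abstract; the exact layout here is an assumption.
    """
    k = idx.shape[1]
    xi = np.repeat(feats[:, None, :], k, axis=1)  # (N, k, C) center, tiled
    xj = feats[idx]                               # (N, k, C) neighbors
    return np.concatenate([xi, xj - xi, xj], axis=-1)  # (N, k, 3C)

def attention_pool(edge_feats):
    """Softmax attention over the k neighbors, then a weighted sum.

    A hypothetical scalar score (sum of edge-feature channels) stands in
    for the learned attention of the paper.
    """
    scores = edge_feats.sum(axis=-1)                        # (N, k)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))  # stable softmax
    w /= w.sum(axis=1, keepdims=True)
    return (w[..., None] * edge_feats).sum(axis=1)          # (N, 3C)
```

Compared with max-pooling over neighbors (as in plain EdgeConv), the attention-weighted sum lets every neighbor contribute in proportion to its score, which is one way the local region's information can be "more fully utilized".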

three-dimensional image processing; point cloud semantic segmentation; attention mechanism; augmented edge; graph convolution

张鲁建、毕远伟、刘耀文、黄延森


School of Computer and Control Engineering, Yantai University, Yantai 264000, Shandong, China


National Natural Science Foundation of China; Natural Science Foundation of Shandong Province; Youth Innovation Science and Technology Support Program of Shandong Province

62272405; ZR2022MF238; 2021KJ080

2024

Laser & Optoelectronics Progress
Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences

Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 1.153
ISSN:1006-4125
Year, volume (issue): 2024, 61(8)