
Unsupervised Domain Adaptation for Outdoor Point Cloud Semantic Segmentation

An unsupervised domain adaptation method for outdoor point cloud semantic segmentation is proposed to address the excessive amount of training data required by semantic segmentation networks in large-scale outdoor scenes. The method uses a modified RandLA-Net to segment a small number of point clouds from the real-world SPTLS3D dataset as the target domain. The segmentation network is pre-trained on the SensatUrban dataset, and the transfer is completed by narrowing the domain gap between the source and target domains. Because the RandLA-Net encoder loses the global features of the original point cloud, an additional path is proposed that extracts global information and feeds it into the network decoder. In addition, to strengthen the extraction of discriminative information, the weights of the local attention module in RandLA-Net are computed from the difference between each point's features and the average features of its neighborhood. Experiments show that the network achieves a mean intersection over union (mIoU) of 54.3% on SemanticKITTI and 71.91% on Semantic3D. After fine-tuning, the pre-trained model reaches an mIoU of 80.05%, 8.83 percentage points better than training directly.
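As a rough illustration of the two modifications described in the abstract, the sketch below shows, in PyTorch, (1) an attentive-pooling layer whose attention scores are driven by the difference between each neighbor's features and the mean feature of the neighborhood, and (2) a global-context branch that pools a global descriptor and concatenates it onto the decoder features. All module names, tensor shapes, and layer widths here are assumptions made for illustration; this is not the paper's implementation.

```python
import torch
import torch.nn as nn


class DiffAttentivePooling(nn.Module):
    """Illustrative RandLA-Net-style attentive pooling where the scores come
    from the deviation of each neighbor's features from the neighborhood mean
    (an assumption based on the abstract, not the authors' code)."""

    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.score_fn = nn.Linear(d_in, d_in, bias=False)  # shared scoring MLP
        self.softmax = nn.Softmax(dim=2)                    # normalize over the K neighbors
        self.mlp = nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU())

    def forward(self, neighbor_feats: torch.Tensor) -> torch.Tensor:
        # neighbor_feats: (B, N, K, d_in) features of the K neighbors of each of N points
        mean_feat = neighbor_feats.mean(dim=2, keepdim=True)   # (B, N, 1, d_in)
        diff = neighbor_feats - mean_feat                      # deviation from the local mean
        scores = self.softmax(self.score_fn(diff))             # (B, N, K, d_in)
        pooled = torch.sum(scores * neighbor_feats, dim=2)     # (B, N, d_in)
        return self.mlp(pooled)                                # (B, N, d_out)


class GlobalContextBranch(nn.Module):
    """Sketch of an extra global-information path: a global descriptor is
    pooled from encoder features and broadcast-concatenated onto the decoder
    features. Purely illustrative."""

    def __init__(self, d_enc: int, d_dec: int):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(d_enc, d_dec), nn.ReLU())

    def forward(self, enc_feats: torch.Tensor, dec_feats: torch.Tensor) -> torch.Tensor:
        # enc_feats: (B, N_enc, d_enc), dec_feats: (B, N_dec, d_dec)
        global_feat = self.proj(enc_feats.max(dim=1).values)   # (B, d_dec)
        global_feat = global_feat.unsqueeze(1).expand_as(dec_feats)
        return torch.cat([dec_feats, global_feat], dim=-1)     # (B, N_dec, 2*d_dec)


if __name__ == "__main__":
    pool = DiffAttentivePooling(d_in=16, d_out=32)
    x = torch.randn(2, 1024, 16, 16)   # 2 scenes, 1024 points, 16 neighbors, 16-dim features
    print(pool(x).shape)               # torch.Size([2, 1024, 32])
```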

point cloud semantic segmentation; unsupervised domain adaptation; transfer learning; fine-tuning; deep learning

胡崇佳、刘金洲、方立


College of Electrical Engineering and Automation, Fuzhou University, Fuzhou 350108, Fujian, China

Quanzhou Institute of Equipment Manufacturing, Haixi Institutes, Chinese Academy of Sciences, Quanzhou 362200, Fujian, China


Quanzhou Science and Technology Plan Project (2020C003R); Young Scientists Fund of the National Natural Science Foundation of China (42101359)

2024

Computer and Modernization (计算机与现代化)
Jiangxi Computer Society; Jiangxi Institute of Computing Technology

CSTPCD
Impact factor: 0.472
ISSN:1006-2475
Year, Volume (Issue): 2024, (1)