计算机与现代化 (Computer and Modernization), 2024, Issue (11): 106-112. DOI: 10.3969/j.issn.1006-2475.2024.11.016

Multi-view 3D Reconstruction Based on Improved Self-attention Mechanism

祁贤 1, 刘大铭 1, 常佳鑫 1

Author information

  • 1. School of Electronic and Electrical Engineering, Ningxia University, Yinchuan 750021, Ningxia, China

Abstract

To address the problems that current multi-view 3D reconstruction methods cannot adapt to high-resolution scenes, suffer from poor reconstruction completeness, and ignore global background information, this paper proposes MVFSAM-CasMVSNet, a 3D reconstruction network that fuses deformable convolution with an improved self-attention mechanism. First, a deformable convolution module dedicated to the multi-view stereo task is designed; it adaptively adjusts the region from which features are extracted and strengthens feature extraction at abrupt depth changes. Second, considering the correlation of depth information and the feature interaction among the views, a multi-view fusion self-attention module is designed: linear self-attention with low computational complexity aggregates long-range context within each view, and an improved multi-head attention captures the depth dependencies between the reference view and the source views. Finally, a multi-stage strategy builds and regularizes the matching cost volume from coarse to fine, and the depth map is generated from the higher-resolution cost volume. Test results on the DTU dataset show that, compared with the baseline model, the network improves completeness, accuracy, and the overall metric by 15.6%, 7.4%, and 11.8% respectively, and achieves the best overall score among the compared models. On the Tanks and Temples dataset, it improves the mean F-score by 6.5% over the baseline. The network therefore offers strong reconstruction quality and generalization for high-resolution scenes in multi-view 3D reconstruction.
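The abstract describes the multi-view fusion self-attention module as a combination of linear self-attention within each view and an improved multi-head attention between the reference and source views. The paper's code is not reproduced here, so the following PyTorch sketch is only illustrative: the class names (LinearSelfAttention, CrossViewAttention), the efficient-attention-style factorisation, the head counts, and the residual connections are assumptions, not the authors' implementation.

```python
# Illustrative sketch, not the authors' code: class names, head counts and the
# efficient-attention-style linear factorisation are assumptions.
import torch
import torch.nn as nn


class LinearSelfAttention(nn.Module):
    """Per-view self-attention with cost linear in H*W: softmax the keys over positions,
    build a small global context matrix, then distribute it back through the queries."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads = heads
        self.to_qkv = nn.Conv2d(dim, dim * 3, kernel_size=1, bias=False)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x):                          # x: [B, C, H, W] feature map of one view
        b, c, h, w = x.shape
        q, k, v = self.to_qkv(x).chunk(3, dim=1)
        q, k, v = (t.reshape(b, self.heads, c // self.heads, h * w) for t in (q, k, v))
        q = q.softmax(dim=-2)                      # normalise queries over the channel dim
        k = k.softmax(dim=-1)                      # normalise keys over the H*W positions
        context = k @ v.transpose(-2, -1)          # [B, heads, d, d] global context
        out = context.transpose(-2, -1) @ q        # [B, heads, d, H*W]
        return x + self.proj(out.reshape(b, c, h, w))


class CrossViewAttention(nn.Module):
    """Reference-view queries attend to source-view keys/values, so depth cues from the
    source views are injected into the reference features."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)

    def forward(self, ref, src):                   # ref, src: [B, C, H, W]
        b, c, h, w = ref.shape
        q = self.norm_q(ref.flatten(2).transpose(1, 2))    # [B, H*W, C]
        kv = self.norm_kv(src.flatten(2).transpose(1, 2))  # [B, H*W, C]
        out, _ = self.attn(q, kv, kv, need_weights=False)
        return ref + out.transpose(1, 2).reshape(b, c, h, w)
```

Keeping the per-view attention linear in the number of pixels, and applying the cross-view attention only between the reference view and each source view, is what would make such a module affordable at the high resolutions the abstract targets.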

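The multi-stage strategy mentioned above regularizes a cost volume at each stage, regresses a depth map as the probability-weighted expectation over the depth hypotheses, and re-centres a narrower hypothesis range on that depth map at the next, higher-resolution stage. Below is a minimal sketch of that regression and range-narrowing step; the function names, the fixed x2 upsampling, and the uniform hypothesis interval are assumptions in the spirit of CasMVSNet-style cascades, not the exact scheme used in the paper.

```python
# Illustrative sketch of cascade-style coarse-to-fine depth regression; function names,
# the x2 upsampling factor and the uniform hypothesis interval are assumptions.
import torch
import torch.nn.functional as F


def depth_regression(cost_volume, depth_hypotheses):
    """Soft-argmin: expected depth under the softmax of the regularised cost volume.

    cost_volume:      [B, D, H, W] matching scores after 3D-CNN regularisation
    depth_hypotheses: [B, D, H, W] candidate depth per pixel and hypothesis
    """
    prob = F.softmax(cost_volume, dim=1)              # probability volume over D hypotheses
    return torch.sum(prob * depth_hypotheses, dim=1)  # [B, H, W] depth map


def next_stage_hypotheses(depth, interval, num_depths):
    """Centre a narrower, finer-resolution hypothesis range on the previous depth map."""
    depth_up = F.interpolate(depth.unsqueeze(1), scale_factor=2,
                             mode="bilinear", align_corners=False)   # [B, 1, 2H, 2W]
    offsets = (torch.arange(num_depths, device=depth.device, dtype=depth.dtype)
               - num_depths // 2) * interval
    return depth_up + offsets.view(1, -1, 1, 1)       # [B, D, 2H, 2W]
```

Shrinking the interval at each finer stage lets the same number of hypotheses cover a tighter depth range around the coarse estimate, which is what keeps the higher-resolution cost volume tractable.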
Keywords

3D reconstruction / deep learning / multi-view stereo / self-attention mechanism

Publication year

2024
计算机与现代化 (Computer and Modernization)
Jiangxi Computer Society; Jiangxi Institute of Computing Technology

CSTPCD
Impact factor: 0.472
ISSN: 1006-2475