
Multi-view 3D Reconstruction with Feature Alignment and Context Guidance

Multi-view stereo reconstruction network with feature alignment and context guidance
To address the poor reconstruction of small features and edge regions in 3D reconstruction, a multi-view 3D reconstruction network based on feature alignment and context guidance, namely AGA-MVSNet (alignment and context guidance MVSNet), was proposed. First, a feature alignment module (FA) and a feature selection module (FS) were constructed to combine different levels of the feature pyramid: the features were first aligned and then fused, enhancing feature extraction for small-sized objects and edge regions. Subsequently, a context guidance module was incorporated into the cost volume regularization to fully exploit surrounding information and strengthen the correlation between cost volumes, thereby improving the accuracy and completeness of 3D reconstruction with only a slight increase in memory consumption. Finally, experiments were conducted on the DTU dataset. The results demonstrated that the proposed method improved accuracy by 2.2% and overall reconstruction quality by 2.5% compared with the baseline network CasMVSNet. In addition, its performance on the Tanks and Temples dataset was competitive with several known methods, and it also generated good point clouds on the BlendedMVS dataset.
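The feature-alignment idea described in the abstract (upsample coarse pyramid features, align them to the fine level via learned per-pixel offsets, then fuse) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the offset-prediction subnetwork is omitted (offsets are passed in directly), and all function names are illustrative assumptions.

```python
import numpy as np

def upsample2x(feat):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def warp_bilinear(feat, offsets):
    """Bilinearly sample feat (C, H, W) at (y + dy, x + dx), offsets shaped (2, H, W).

    This stands in for the alignment step: in the network, offsets would be
    predicted from the concatenated coarse and fine features.
    """
    C, H, W = feat.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    y = np.clip(ys + offsets[0], 0, H - 1)
    x = np.clip(xs + offsets[1], 0, W - 1)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * feat[:, y0, x0]
            + (1 - wy) * wx * feat[:, y0, x1]
            + wy * (1 - wx) * feat[:, y1, x0]
            + wy * wx * feat[:, y1, x1])

def align_and_fuse(coarse, fine, offsets):
    """FA sketch: upsample coarse features, warp by offsets, then fuse with fine."""
    aligned = warp_bilinear(upsample2x(coarse), offsets)
    return fine + aligned
```

With zero offsets this reduces to plain upsample-and-add fusion; nonzero offsets shift the coarse features before fusion, which is what lets the network correct the misalignment between pyramid levels at object boundaries.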

deep learning; multi-view 3D reconstruction; feature alignment; context guidance; 3D attention mechanism

XIONG Chao, WANG Yunyan, LUO Yuhao


School of Electrical and Electronic Engineering, Hubei University of Technology, Wuhan 430068, Hubei, China

Xiangyang Industrial Research Institute, Hubei University of Technology, Xiangyang 441100, Hubei, China


National Natural Science Foundation of China

41601394

2024

Journal of Graphics
China Graphics Society


Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 0.73
ISSN:2095-302X
Year, volume (issue): 2024, 45(5)