3D Reconstruction Method Fusing Implicit Rendering and Explicit Modeling
Aiming at the challenges of texture loss and large-area holes in multi-view 3D reconstruction, a 3D reconstruction method combining neural implicit rendering and explicit modeling is proposed. First, multiple views are taken as input, camera parameters are recovered with an incremental structure-from-motion algorithm, and an accurate sparse point cloud is generated. Second, volume density and RGB color for volume rendering are predicted by a deep fully connected network fused with a self-attention mechanism. Third, sample points along each ray are drawn hierarchically to evaluate the volume rendering integral, a loss function built on the rendered result is used for parameter optimization, and the resulting implicit 3D representation is stored in the network weights. Finally, an isosurface extraction algorithm from explicit reconstruction is applied to obtain the 3D model. The method is verified experimentally on the DTU dataset. On scenes Scan16 and Scan19, its average overall accuracy reaches 0.403 mm; compared with classic explicit reconstruction models, the reconstructed models have smaller holes and more prominent details. The method offers a useful reference for real-scene 3D mapping and virtual reality.
Keywords: 3D reconstruction; multi-view stereo; explicit reconstruction; implicit rendering; fully connected network
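For concreteness, the sketches below illustrate the main steps summarized in the abstract; they are minimal illustrations under stated assumptions, not the authors' implementation. The first shows one plausible form of a fully connected density/color network with a self-attention mechanism, in PyTorch. The class name AttnNeRFMLP, the layer widths, the placement of attention across ray samples, and the 63-dimensional positionally encoded input (3D point with 10 frequency bands) are all assumptions; the paper's exact architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class AttnNeRFMLP(nn.Module):
    """Illustrative density/color network: fully connected layers with
    self-attention applied across the samples of each ray (an assumed
    arrangement, not the paper's exact architecture)."""
    def __init__(self, d_in=63, width=256, heads=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(d_in, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
        )
        self.attn = nn.MultiheadAttention(width, heads, batch_first=True)
        self.sigma_head = nn.Linear(width, 1)                    # volume density
        self.rgb_head = nn.Sequential(nn.Linear(width, 3), nn.Sigmoid())

    def forward(self, x):                        # x: (rays, samples, d_in)
        h = self.backbone(x)
        h, _ = self.attn(h, h, h)                # attend across ray samples
        sigma = torch.relu(self.sigma_head(h))   # (rays, samples, 1), non-negative
        rgb = self.rgb_head(h)                   # (rays, samples, 3), in [0, 1]
        return sigma, rgb
```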
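Given per-sample densities and colors, the volume rendering integral is evaluated with the standard discrete quadrature used in NeRF-style pipelines: C = Σ_i T_i (1 − exp(−σ_i δ_i)) c_i, where T_i = exp(−Σ_{j<i} σ_j δ_j) is the transmittance and δ_i the spacing between samples. A minimal NumPy sketch (the function name volume_render is illustrative):

```python
import numpy as np

def volume_render(densities, colors, t_vals):
    """Discrete volume rendering quadrature along one ray.

    densities: (N,) non-negative volume densities sigma_i
    colors:    (N, 3) RGB predictions c_i at the same samples
    t_vals:    (N,) sample depths along the ray, strictly increasing
    Returns the accumulated RGB value and per-sample weights.
    """
    # Distances between adjacent samples; the last interval is open-ended.
    deltas = np.append(np.diff(t_vals), 1e10)
    # alpha_i = 1 - exp(-sigma_i * delta_i): opacity of each segment.
    alphas = 1.0 - np.exp(-densities * deltas)
    # T_i = prod_{j<i} (1 - alpha_j): transmittance up to sample i.
    trans = np.cumprod(np.append(1.0, 1.0 - alphas[:-1]))
    weights = alphas * trans                       # w_i = T_i * alpha_i
    rgb = (weights[:, None] * colors).sum(axis=0)  # C = sum_i w_i c_i
    return rgb, weights
```

A photometric loss for parameter optimization can then be built by comparing the rendered color against the ground-truth pixel, e.g. a mean squared error over sampled rays.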
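Hierarchical sampling reuses the weights from a coarse rendering pass as a piecewise-constant PDF and draws additional samples where the weights are large, via inverse-transform sampling. The sketch below is a simplification that resamples existing depths rather than jittering within bins; hierarchical_sample is an assumed helper name.

```python
def hierarchical_sample(t_vals, weights, n_fine, rng=np.random.default_rng()):
    """Draw n_fine extra depths where the coarse weights are large
    (inverse-transform sampling of the piecewise-constant PDF)."""
    pdf = weights + 1e-5                 # avoid a degenerate all-zero PDF
    pdf = pdf / pdf.sum()
    cdf = np.cumsum(pdf)
    u = rng.random(n_fine)               # uniform samples in [0, 1)
    idx = np.minimum(np.searchsorted(cdf, u), len(t_vals) - 1)  # invert CDF
    # Simplified: reuse coarse depths; full NeRF jitters within each bin.
    return np.sort(np.concatenate([t_vals, t_vals[idx]]))
```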
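Finally, the explicit model is recovered by isosurface extraction from the learned density field. The sketch below uses marching cubes from scikit-image as a stand-in for the paper's isosurface extraction algorithm; the grid resolution, iso level, and the density_fn interface are assumptions.

```python
import numpy as np
from skimage import measure  # pip install scikit-image

def extract_mesh(density_fn, bbox_min, bbox_max, res=128, iso=10.0):
    """Sample the learned density on a regular grid, then run marching
    cubes to turn the implicit field into an explicit triangle mesh."""
    bbox_min, bbox_max = np.asarray(bbox_min), np.asarray(bbox_max)
    axes = [np.linspace(bbox_min[i], bbox_max[i], res) for i in range(3)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)  # (res,res,res,3)
    sigma = density_fn(grid.reshape(-1, 3)).reshape(res, res, res)
    spacing = tuple((bbox_max - bbox_min) / (res - 1))           # voxel size
    verts, faces, _, _ = measure.marching_cubes(sigma, level=iso, spacing=spacing)
    return verts + bbox_min, faces                               # world-space mesh
```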