3D reconstruction based on structural ambiguity optimization in neural radiance fields
Few-view 3D reconstruction recovers the 3D geometry of an object or scene from only a small number of views. However, because the limited viewpoints do not adequately cover the scene, there is insufficient information to accurately restore its 3D structure, which leads to inaccurate or blurred reconstruction results. This paper proposes a neural radiance field-based framework that addresses structural blurring by using an accurate cost volume to correlate foreground and background depth information. First, local features of the foreground and background are extracted with a pyramid network to better capture scene details, and a self-attention mechanism is introduced so that key regions receive attention during feature extraction. Then, an adaptive receptive field module enables smooth transitions between feature scales, from which an accurate feature cost volume is constructed. Finally, a random structural similarity loss replaces pixel-by-pixel supervision: pixels in a local region are supervised as a whole, capturing structural information in the scene more comprehensively. Experimental results on the DTU dataset show that, compared with the competing methods, the proposed approach achieves the best PSNR and LPIPS and the second-best SSIM, improving PSNR, LPIPS, and SSIM by 0.478, 0.001, and 0.01, respectively. Compared with the baseline model, experiments on the DTU, LLFF, and NeRF datasets show that the proposed method effectively alleviates the structural ambiguity caused by insufficient information in few-view 3D reconstruction.
Keywords: 3D reconstruction; neural radiance field; cost volume; structural ambiguity
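The random structural similarity loss described above supervises a randomly sampled local patch as a whole instead of comparing pixels one by one. A minimal NumPy sketch of that idea follows; the function names, patch size, and patch-sampling scheme are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM between two equally sized patches (values in [0, 1])."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()  # covariance between the two patches
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def random_ssim_loss(pred, target, patch=8, n_patches=4, rng=None):
    """Average (1 - SSIM) over randomly sampled patches.

    Supervising whole patches couples neighboring pixels, so the loss
    penalizes structural (not just per-pixel) discrepancies.
    """
    rng = np.random.default_rng(rng)
    h, w = pred.shape[:2]
    losses = []
    for _ in range(n_patches):
        # sample the top-left corner of a patch uniformly at random
        i = int(rng.integers(0, h - patch + 1))
        j = int(rng.integers(0, w - patch + 1))
        p = pred[i:i + patch, j:j + patch]
        t = target[i:i + patch, j:j + patch]
        losses.append(1.0 - ssim(p, t))
    return float(np.mean(losses))
```

Identical rendered and ground-truth images give a loss of zero, while structural differences within any sampled patch increase the loss; in a training loop this term would replace or complement the usual per-pixel photometric loss.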