NeRF 3D Reconstruction Method Based on Cone Tracking and Network Decomposition
In computer vision, Neural Radiance Fields (NeRF) take spatial coordinates and other dimensions, such as time and camera pose, as input and approximate the objective function with a Multi-Layer Perceptron (MLP) network to generate the target outputs (color and depth). NeRF reconstructs 3D scenes well, but it produces blur or distortion when rendering at different resolutions and is slow to train. To address these two issues, this study proposes a NeRF 3D reconstruction method based on cone tracking and network decomposition. First, the cone-tracking method projects a cone for each pixel; the projected cone is cut into a series of conical segments that are featurized along the cone, and blur and artifact effects are reduced by efficiently rendering these anti-aliased cones. Second, to shorten the training time, the network decomposition method splits the original NeRF network, which receives five-dimensional input, into two networks. Experimental results show that the proposed method improves the Peak Signal-to-Noise Ratio (PSNR) by 14.4%-24.6% compared with NeRF, F2-NeRF, and other algorithms on the NeRF_Synthetic, LLFF, and Multiresolution datasets. The training time is also reduced, so the method reconstructs richer detailed features, produces better visual results, and trains faster.
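For illustration only, the sketch below (in PyTorch, an assumed implementation language) shows how the two ideas summarized above might look in code: an integrated positional encoding of the conical segments produced by cone tracking, and a decomposition of the five-dimensional MLP into a spatial network plus a smaller directional network. The layer widths, frequency count, and tensor names are hypothetical; the abstract does not specify the actual architecture.

# Minimal sketch (PyTorch). The encoding and the two-network split below are
# illustrative assumptions in the spirit of the abstract, not the authors'
# exact architecture.
import torch
import torch.nn as nn

def integrated_pos_enc(mean, var, num_freqs=10):
    """Encode a Gaussian approximation (mean, per-axis variance) of a conical
    segment. High frequencies are attenuated by exp(-0.5 * var * 4^l), which
    is what suppresses aliasing when a pixel's cone covers a large region."""
    freqs = 2.0 ** torch.arange(num_freqs, device=mean.device)        # (L,)
    scaled = mean[..., None, :] * freqs[:, None]                      # (..., L, 3)
    weight = torch.exp(-0.5 * var[..., None, :] * freqs[:, None] ** 2)
    enc = torch.cat([weight * torch.sin(scaled),
                     weight * torch.cos(scaled)], dim=-1)             # (..., L, 6)
    return enc.flatten(-2)                                            # (..., 6L)

class DecomposedNeRF(nn.Module):
    """Two-network decomposition of the 5D input: a spatial network maps the
    encoded position to density and a feature vector, and a smaller
    directional network maps that feature plus the view direction to color."""
    def __init__(self, pos_dim=60, dir_dim=24, width=256):
        super().__init__()
        self.spatial = nn.Sequential(
            nn.Linear(pos_dim, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, width + 1),             # density + feature
        )
        self.directional = nn.Sequential(
            nn.Linear(width + dir_dim, width // 2), nn.ReLU(),
            nn.Linear(width // 2, 3), nn.Sigmoid(),  # RGB
        )

    def forward(self, pos_enc, dir_enc):
        h = self.spatial(pos_enc)
        density, feat = h[..., :1], h[..., 1:]
        rgb = self.directional(torch.cat([feat, dir_enc], dim=-1))
        return density, rgb

# Usage: encode one batch of conical segments and query the decomposed network.
mean = torch.rand(1024, 3)          # segment centers along each pixel's cone
var = torch.full((1024, 3), 0.01)   # per-axis variance of each segment
dir_enc = torch.rand(1024, 24)      # encoded view directions (placeholder)
density, rgb = DecomposedNeRF()(integrated_pos_enc(mean, var), dir_enc)

The design point the sketch tries to convey is that only the small directional network depends on the view direction, so the bulk of the computation is conditioned on position alone, which is one plausible way a network decomposition can reduce training cost.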