A free viewpoint synthesis method based on differentiable rendering
To address the challenges that highly variable lighting conditions and camera parameters in uncontrolled environments pose to free viewpoint synthesis, an approximate differentiable deferred inverse rendering pipeline (ADDIRP) was proposed. The pipeline incorporated a physics-based camera model to accurately simulate the optical imaging process of the camera. First, photometric and geometric camera models were constructed from the input images and their corresponding poses: the photometric camera model was represented by learnable parameters such as exposure and white balance, while the geometric camera model was represented by learnable intrinsic and extrinsic parameters. Next, the components of the pipeline were optimized with an image-space loss between the rendered and target images, improving the robustness of the inverse rendering pipeline to complex lighting and roughly captured images. Finally, the approach produced 3D content reconstructions compatible with traditional graphics engines. Experimental results demonstrated that ADDIRP outperformed existing methods on real-world datasets and achieved superior visual perception consistency on synthetic datasets while maintaining comparable synthesis quality.
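To make the optimization concrete, the sketch below shows one plausible way such a learnable photometric camera model (per-image exposure and white balance) could be fitted jointly with scene parameters against an image-space loss. It is a minimal illustration only: the class and parameter names, the simple gain model, and the stand-in renderer are assumptions, not the paper's actual formulation or implementation.

```python
import torch
import torch.nn as nn


class PhotometricCamera(nn.Module):
    """Hypothetical per-image photometric model: exposure and white-balance gains."""

    def __init__(self, num_images: int):
        super().__init__()
        # Per-image log-exposure (init 0 -> gain 1.0).
        self.log_exposure = nn.Parameter(torch.zeros(num_images, 1, 1, 1))
        # Per-image, per-channel log white-balance gains (init 0 -> gain 1.0).
        self.log_wb = nn.Parameter(torch.zeros(num_images, 3, 1, 1))

    def forward(self, rendered: torch.Tensor, idx: torch.Tensor) -> torch.Tensor:
        # rendered: (B, 3, H, W) linear-radiance images from the renderer.
        gain = torch.exp(self.log_exposure[idx]) * torch.exp(self.log_wb[idx])
        return (rendered * gain).clamp(0.0, 1.0)


def dummy_renderer(scene_params: torch.Tensor) -> torch.Tensor:
    # Stand-in for the differentiable deferred renderer: any differentiable
    # map from scene/camera parameters to an image works for this toy example.
    return torch.sigmoid(scene_params)


# Toy joint optimization: fit scene parameters and the photometric model by
# minimizing an image-space loss between rendered and captured target images.
num_images, H, W = 4, 8, 8
scene_params = nn.Parameter(torch.randn(num_images, 3, H, W) * 0.1)
photo_cam = PhotometricCamera(num_images)
targets = torch.rand(num_images, 3, H, W)   # captured photos (random toy data)
idx = torch.arange(num_images)

optimizer = torch.optim.Adam(list(photo_cam.parameters()) + [scene_params], lr=1e-2)
for step in range(200):
    optimizer.zero_grad()
    rendered = dummy_renderer(scene_params)      # differentiable rendering
    predicted = photo_cam(rendered, idx)         # apply learnable photometrics
    loss = torch.nn.functional.l1_loss(predicted, targets)
    loss.backward()
    optimizer.step()
```

In the same spirit, the geometric camera model described above could expose intrinsics and extrinsics as additional learnable parameters feeding the renderer, so that gradients of the image-space loss also refine roughly estimated poses.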