Self-Supervised Learning for Spatial-Domain Light-Field Super-Resolution Imaging
This paper proposes a self-supervised learning-based method for super-resolution imaging of spatially resolution-limited light-field images. Using a deep-learning autoencoder, super-resolution reconstruction in the spatial domain is performed simultaneously for all light-field sub-aperture images. A hybrid loss function based on multi-scale feature structure and total-variation regularization is designed to constrain the similarity between the model output and the original low-resolution images. Numerical experiments show that the proposed method suppresses noise, and its average super-resolution performance on several light-field imaging datasets exceeds that of supervised learning-based methods for light-field spatial-domain super-resolution.
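To make the described loss concrete, the following is a minimal sketch, assuming a PyTorch implementation, of a hybrid objective that combines a multi-scale structural fidelity term with total-variation regularization. The specific choices here (average-pool degradation back to the low-resolution grid, L1 comparison at each scale, the scale set, and the TV weight) are illustrative assumptions and are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def total_variation(img):
    """Anisotropic total-variation penalty on a batch of images (N, C, H, W)."""
    dh = torch.abs(img[:, :, 1:, :] - img[:, :, :-1, :]).mean()
    dw = torch.abs(img[:, :, :, 1:] - img[:, :, :, :-1]).mean()
    return dh + dw

def multiscale_structure_loss(pred, target, scales=(1, 2, 4)):
    """Compare two images at several spatial scales with an L1 penalty.

    Scale 1 is the original resolution; larger scales average-pool both
    images before comparison, emphasizing coarser structure.
    """
    loss = 0.0
    for s in scales:
        p = F.avg_pool2d(pred, s) if s > 1 else pred
        t = F.avg_pool2d(target, s) if s > 1 else target
        loss = loss + F.l1_loss(p, t)
    return loss / len(scales)

def hybrid_loss(sr_output, lr_input, downscale, tv_weight=1e-4):
    """Self-supervised hybrid loss: fidelity to the low-resolution input
    plus TV smoothing on the super-resolved output (no HR ground truth).
    """
    # Re-degrade the SR output to the LR grid (simple average pooling here,
    # a placeholder for whatever degradation model the method assumes).
    sr_down = F.avg_pool2d(sr_output, downscale)
    return multiscale_structure_loss(sr_down, lr_input) + tv_weight * total_variation(sr_output)
```

In a self-supervised setting of this kind, the fidelity term only ever references the available low-resolution sub-aperture images, while the TV term discourages the noise amplification that an unconstrained reconstruction would otherwise produce.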