Low-Light Enhancement Model in Natural Scenes Based on Generative Adversarial Network
In most current low-light image enhancement models, it is difficult to achieve both illumination enhancement and preservation of original image features, and to adapt to the variety of low-light conditions found in natural scenes. To address these issues, an improved model based on the Generative Adversarial Network (GAN) is proposed. The model first extracts shallow features through standard convolution and then constructs a Global-Local Illumination Estimation (GLIE) module with an illumination consistency loss. Inside the GLIE module, a global-local feature extraction structure is designed, in which the Swin Transformer and multi-scale dilated convolution jointly realize scene-level feature learning and smooth illumination enhancement. Subsequently, the Original Feature Retention Block (OFR-Block) concatenates and fuses the GLIE output with the shallow features, and channel attention is further applied to preserve the original image features and suppress noise. In addition, an improved loss function provides effective supervision of both illumination enhancement and original feature preservation during training. Experimental results demonstrate that the subjective effect of the model is realistic and natural, with better preservation of the color and texture details of the original image and stronger noise suppression than mainstream models such as Retinex-Net and EnlightenGAN. The Natural Image Quality Evaluator (NIQE) and Lightness-Order-Error (LOE) scores reach 3.88 and 199.4 on the test data, respectively, ranking in the top three across different test datasets for better overall performance.
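To make the two building blocks named above concrete, the following is a minimal NumPy sketch, not the paper's implementation: multi-scale dilated convolution (the wide-receptive-field idea behind the GLIE module's smooth illumination estimation) followed by squeeze-and-excitation style channel attention (the reweighting idea behind the OFR-Block). The 3x3 averaging kernel, the dilation rates (1, 2, 4), and the random attention weights are all placeholder assumptions; a trained model would learn these parameters.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """Single-channel 2D convolution with 'same' zero padding and a dilation rate."""
    kh, kw = kernel.shape
    ph, pw = dilation * (kh // 2), dilation * (kw // 2)
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            for a in range(kh):
                for b in range(kw):
                    out[i, j] += kernel[a, b] * xp[i + a * dilation, j + b * dilation]
    return out

def multi_scale_features(x, dilations=(1, 2, 4)):
    """Stack responses of one 3x3 kernel at several dilation rates.

    Larger dilations widen the receptive field, which captures smoother,
    more scene-level illumination; the rates here are assumptions.
    """
    k = np.full((3, 3), 1.0 / 9.0)  # averaging kernel as a stand-in for learned weights
    return np.stack([dilated_conv2d(x, k, d) for d in dilations])

def channel_attention(feat, seed=0):
    """Squeeze-and-excitation style channel reweighting, (C, H, W) -> (C, H, W).

    Weights are random placeholders; a real model learns them during training.
    """
    c = feat.shape[0]
    z = feat.mean(axis=(1, 2))                      # squeeze: per-channel statistic
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((c, c)) / np.sqrt(c)   # tiny "fully connected" layer
    w2 = rng.standard_normal((c, c)) / np.sqrt(c)
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))  # sigmoid gates in (0, 1)
    return feat * s[:, None, None]                  # excite: rescale each channel

img = np.random.default_rng(1).random((8, 8))   # toy single-channel "low-light" patch
feats = multi_scale_features(img)               # shape (3, 8, 8): one map per dilation
out = channel_attention(feats)                  # same shape, channels reweighted
```

Because the sigmoid gates lie strictly in (0, 1), the attention step can only attenuate each channel, which mirrors how channel attention suppresses noisy feature maps while retaining informative ones.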