Low-Light Image Enhancement via Cross-Domain Feature Fusion
To address the decline in image quality under low-light conditions, low-light image enhancement methods aim to improve visible details such as brightness and color richness of degraded images, producing clearer images that align more closely with human visual expectations. Although deep learning-based enhancement methods have made remarkable progress, traditional convolutional neural networks are limited in feature extraction by their locality, making it difficult for the network to effectively model long-distance relationships between image pixels. In contrast, the Transformer model uses the self-attention mechanism to better capture long-range dependencies between pixels. However, existing research shows that global self-attention can deprive a network of spatial locality, degrading the ability of Transformer-based networks to handle local feature details. Therefore, this study proposes a novel low-light image enhancement network, MFF-Net. It adopts the principle of cross-domain feature fusion to integrate the advantages of convolutional neural networks and Transformers, yielding cross-domain feature representations that contain multiscale and multidimensional information. In addition, a feature semantic transformation module is specially designed to maintain feature semantic consistency. Experimental results on public low-light datasets show that the proposed MFF-Net achieves better enhancement than mainstream methods, with the generated images exhibiting better visual quality.
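
As an illustration of the cross-domain feature fusion principle described above, the following PyTorch sketch pairs a convolutional branch (local detail) with a self-attention branch (long-range pixel dependencies) and fuses their outputs. Module and parameter names such as CrossDomainFusionBlock, fuse, and num_heads are illustrative assumptions and do not reproduce the actual MFF-Net architecture.

# Minimal sketch of a cross-domain (CNN + Transformer) fusion block.
# Assumption: names and layer choices are illustrative, not the paper's design.
import torch
import torch.nn as nn


class CrossDomainFusionBlock(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # CNN branch: captures local feature details
        self.conv_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        # Transformer branch: global self-attention over flattened pixels
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # 1x1 convolution fusing the concatenated branch outputs
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local_feat = self.conv_branch(x)                    # (B, C, H, W)
        tokens = self.norm(x.flatten(2).transpose(1, 2))    # (B, H*W, C)
        global_feat, _ = self.attn(tokens, tokens, tokens)  # (B, H*W, C)
        global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)
        fused = self.fuse(torch.cat([local_feat, global_feat], dim=1))
        return fused + x                                    # residual connection


if __name__ == "__main__":
    block = CrossDomainFusionBlock(channels=32)
    out = block(torch.randn(1, 32, 64, 64))
    print(out.shape)  # torch.Size([1, 32, 64, 64])

In this sketch the convolutional output supplies spatial locality while the attention output supplies long-range context; concatenating the two and projecting back with a 1x1 convolution is one simple way to obtain a fused representation.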