Infrared and Visible Image Fusion Based on Attention Mechanism and Illumination-Aware Network
Some image fusion methods do not fully consider the illumination conditions of the imaging environment, which results in fused images with insufficiently bright infrared targets and low overall brightness, thereby degrading the clarity of texture details. To address these issues, an infrared and visible image fusion algorithm based on an attention mechanism and an illumination-aware network was proposed. Firstly, before training the fusion network, the illumination-aware network was used to estimate the probability that the current scene was daytime or nighttime, and this probability was applied in the loss function of the fusion network to guide its training. Then, in the feature extraction part of the network, a spatial attention mechanism and depthwise separable convolutions were used to extract features from the source images; the resulting spatially salient information was fed into a convolutional neural network (CNN) to extract deep features. Finally, the deep features were concatenated for image reconstruction to obtain the final fused image. The experimental results show that the proposed method improves mutual information (MI), visual information fidelity (VIF), average gradient (AG), fusion quality (Qabf), and spatial frequency (SF) by an average of 39.33%, 11.29%, 26.27%, 47.11%, and 39.01%, respectively. At the same time, it effectively preserves the brightness of infrared targets in the fused images while retaining rich texture details.
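The components described above can be illustrated with a minimal PyTorch-style sketch. The module and function names (SpatialAttention, DepthwiseSeparableConv, illumination_aware_intensity_loss), the layer sizes, and the exact way the day/night probabilities weight the loss terms are assumptions made for illustration only, not the paper's implementation; the sketch merely shows how a spatial attention mask, depthwise separable convolutions, and an illumination-weighted intensity loss might fit together.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialAttention(nn.Module):
    """Spatial attention: channel-wise avg/max pooling -> conv -> sigmoid mask."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = torch.mean(x, dim=1, keepdim=True)    # (B, 1, H, W)
        max_map, _ = torch.max(x, dim=1, keepdim=True)  # (B, 1, H, W)
        mask = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * mask                                  # spatially re-weighted features


class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution followed by a pointwise (1x1) convolution."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


def illumination_aware_intensity_loss(fused, ir, vis, p_day, p_night):
    """Hypothetical illumination-aware intensity loss: the day/night probabilities
    from the illumination-aware network weight how strongly the fused image should
    follow the visible intensities (daytime) versus the infrared intensities (night)."""
    w_vis = p_day / (p_day + p_night)
    w_ir = p_night / (p_day + p_night)
    return w_vis * F.l1_loss(fused, vis) + w_ir * F.l1_loss(fused, ir)
```

Under these assumptions, the illumination-aware network is run once per training batch to obtain p_day and p_night, which then scale the intensity terms of the fusion loss, while SpatialAttention and DepthwiseSeparableConv form the shallow feature-extraction stage whose outputs are passed to the deeper CNN and finally concatenated for reconstruction.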