Infrared and Visible Light Image Fusion Based on Convolution and Self-Attention
Because the convolution operation attends too strongly to local image features, it can cause loss of global semantic information in the fused image when fusing source images. To address this problem, an infrared and visible light image fusion model based on convolution and self-attention is proposed in this paper. In the proposed model, a convolution module is adopted to extract local image features, and self-attention is adopted to extract global features. In addition, since simple fusion operations cannot handle features at different levels, an embedded block residual fusion module is proposed to realize multi-layer feature fusion. Experimental results demonstrate that the proposed method outperforms unsupervised deep fusion algorithms in both subjective evaluation and six objective metrics; among these, mutual information, standard deviation, and visual fidelity are improved by 61.33%, 9.96%, and 19.46%, respectively.
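The contrast the abstract draws, convolution for local features versus self-attention for global context, can be sketched minimally as follows. This is an illustrative NumPy sketch, not the authors' implementation: the random projection weights, toy image, and channel sizes are all hypothetical stand-ins for learned parameters.

```python
import numpy as np

def self_attention(x, d_k=16, seed=0):
    """Scaled dot-product self-attention over a flattened feature map.

    x: (N, C) array of N spatial positions with C channels.
    Each position attends to every other position, capturing the
    global context that a local convolution window misses.
    """
    rng = np.random.default_rng(seed)
    n, c = x.shape
    # Random projections stand in for learned query/key/value weights.
    w_q = rng.standard_normal((c, d_k))
    w_k = rng.standard_normal((c, d_k))
    w_v = rng.standard_normal((c, c))
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(d_k)              # (N, N) pairwise affinities
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over all positions
    return attn @ v

def conv3x3(img, kernel):
    """Valid 3x3 convolution on a single-channel image: local features only."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

# Toy 8x8 single-channel "image".
img = np.arange(64, dtype=float).reshape(8, 8)
local = conv3x3(img, np.ones((3, 3)) / 9.0)   # local branch: 3x3 neighborhood
tokens = img.reshape(-1, 1)                   # 64 positions, 1 channel
global_feat = self_attention(tokens, d_k=4)   # global branch: all-pairs mixing
print(local.shape, global_feat.shape)         # (6, 6) (64, 1)
```

The key point is the receptive field: each output of `conv3x3` depends only on a 3x3 neighborhood, while each row of the attention output is a weighted combination of all 64 positions. A fusion model can then combine both branches so local detail and global semantics are preserved.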