Infrared and Visible Image Fusion Network with Multi-Relation Perception
A multi-relation perception network for infrared and visible image fusion is proposed in this paper to fully integrate the consistent and complementary features of infrared and visible images. First, a dual-branch encoder module extracts features from the source images. The extracted features are then fed into a fusion strategy module based on multi-relation perception. Finally, a decoder module reconstructs the fused features and generates the final fused image. In the fusion strategy module, feature relationship perception and weight relationship perception are constructed by exploring the interactions among the shared, differential, and cumulative relationships across modalities, so that the consistent and complementary features of the two modalities are integrated into the fused features. To constrain network training and preserve the intrinsic characteristics of the source images, a wavelet transform-based loss function is developed to help retain both the low-frequency and high-frequency components of the source images during fusion. Experiments show that, compared with state-of-the-art deep learning-based image fusion methods, the proposed method more fully integrates the consistent and complementary features of the source images, thereby successfully preserving the background information of visible images and the thermal targets of infrared images. Overall, the fusion performance of the proposed method surpasses that of the compared methods.
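To make the fusion strategy concrete, the following is a minimal PyTorch sketch of a multi-relation fusion module. It assumes the shared, differential, and cumulative relationships are modeled as the element-wise product, absolute difference, and sum of the infrared and visible feature maps, and that weight relationship perception predicts per-pixel modality weights; the paper's actual operators, module names, and weight-perception design may differ.

```python
import torch
import torch.nn as nn

class MultiRelationFusion(nn.Module):
    """Hypothetical sketch of a multi-relation perception fusion strategy."""

    def __init__(self, channels: int):
        super().__init__()
        # Feature relationship perception: merge the three relation maps.
        self.relation_conv = nn.Conv2d(3 * channels, channels, kernel_size=3, padding=1)
        # Weight relationship perception: predict per-modality weight maps.
        self.weight_conv = nn.Conv2d(channels, 2, kernel_size=1)

    def forward(self, feat_ir: torch.Tensor, feat_vis: torch.Tensor) -> torch.Tensor:
        shared = feat_ir * feat_vis                    # shared (consistent) relationship
        differential = torch.abs(feat_ir - feat_vis)   # differential (complementary) relationship
        cumulative = feat_ir + feat_vis                # cumulative relationship
        relation_feat = self.relation_conv(
            torch.cat([shared, differential, cumulative], dim=1)
        )
        # Soft weights deciding how much each modality contributes per pixel.
        weights = torch.softmax(self.weight_conv(relation_feat), dim=1)
        w_ir, w_vis = weights[:, 0:1], weights[:, 1:2]
        return w_ir * feat_ir + w_vis * feat_vis + relation_feat
```

Similarly, the wavelet transform-based loss could be sketched as below, assuming a one-level Haar decomposition in which the fused image's low-frequency band is pulled toward the average of the source bands and its high-frequency bands toward their element-wise maxima by magnitude; the actual sub-band targets and weights in the paper may differ.

```python
import torch
import torch.nn.functional as F

def haar_dwt2(x: torch.Tensor):
    """One-level 2D Haar decomposition (assumes even spatial dimensions)."""
    a = x[..., 0::2, 0::2]
    b = x[..., 0::2, 1::2]
    c = x[..., 1::2, 0::2]
    d = x[..., 1::2, 1::2]
    ll = (a + b + c + d) / 2   # low-frequency approximation
    lh = (a + b - c - d) / 2   # high-frequency detail sub-bands
    hl = (a - b + c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, (lh, hl, hh)

def wavelet_fusion_loss(fused, ir, vis, w_low=1.0, w_high=1.0):
    """Hypothetical wavelet-domain loss constraining low- and high-frequency components."""
    ll_f, highs_f = haar_dwt2(fused)
    ll_ir, highs_ir = haar_dwt2(ir)
    ll_vis, highs_vis = haar_dwt2(vis)
    loss_low = F.l1_loss(ll_f, (ll_ir + ll_vis) / 2)
    loss_high = sum(
        F.l1_loss(hf, torch.where(hi.abs() > hv.abs(), hi, hv))
        for hf, hi, hv in zip(highs_f, highs_ir, highs_vis)
    )
    return w_low * loss_low + w_high * loss_high
```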