Infrared and visible image fusion algorithm based on progressive difference-aware attention
The fusion of infrared and visible images is an important research direction in image fusion. The two image sources provide complementary information, so the fused image contains more information and supports better recognition and analysis. Current approaches fall into two main categories: traditional methods and deep learning-based methods. Building on existing work, this paper proposes a progressive cross-modal difference-aware image fusion network and establishes an end-to-end visible-infrared image fusion model. The model adopts a CNN-based framework as its backbone, consisting of a progressive feature extractor and an image reconstructor. First, the algorithm establishes separate feature extraction branches for the visible and infrared images and introduces a difference-aware attention module (DAAM) between the two branches. This module allows the network to gradually integrate complementary information during the feature extraction stage, so the extractor can fully capture both the common and the complementary features of the infrared and visible images. The extracted deep features are then combined through an intermediate fusion strategy that merges the visible and infrared features to obtain the best possible fusion result, and the fused image is reconstructed by the image reconstructor. Finally, the proposed method is evaluated against other relevant methods, and the experimental results show that it effectively improves the fusion effect.
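The dual-branch extraction with cross-modal difference-aware attention described above can be sketched as follows. This is a minimal illustrative NumPy sketch, not the paper's actual implementation: the exact form of the DAAM (here, a sigmoid gate over the feature difference that injects each modality's exclusive information into the other branch), the number of progressive stages, and the averaging fusion rule are all assumptions made for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def daam(feat_vis, feat_ir):
    """Illustrative difference-aware attention: derive attention weights
    from the cross-modal feature difference, then inject the weighted
    complementary information into the opposite branch."""
    diff_ir_only = feat_ir - feat_vis       # information stronger in IR
    diff_vis_only = feat_vis - feat_ir      # information stronger in visible
    att_ir = sigmoid(diff_ir_only)          # gate on IR-exclusive features
    att_vis = sigmoid(diff_vis_only)        # gate on visible-exclusive features
    feat_vis_out = feat_vis + att_ir * feat_ir    # enrich visible branch
    feat_ir_out = feat_ir + att_vis * feat_vis    # enrich IR branch
    return feat_vis_out, feat_ir_out

def progressive_extract(vis, ir, stages=3):
    """Apply the attention exchange after each of several extraction
    stages, so complementary information is integrated gradually."""
    fv, fi = vis, ir
    for _ in range(stages):
        fv, fi = daam(fv, fi)
    return fv, fi

def fuse(fv, fi):
    """Intermediate fusion of the two deep feature maps (simple average
    here as a placeholder for the paper's fusion strategy)."""
    return 0.5 * (fv + fi)
```

In a real CNN implementation the difference would first pass through learned convolutions before the gating, and the fused features would feed the image reconstructor; the sketch only shows the information flow between the two branches.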