An infrared and visible image fusion method based on deep convolutional feature extraction
Most current infrared and visible image fusion approaches require decomposition of the source images during the fusion process, which leads to blurred details and loss of salient targets. To address this problem, an infrared and visible image fusion method based on deep convolutional feature extraction is proposed. Firstly, the feature extraction capability of EfficientNet is analysed using transfer learning, and seven feature extraction modules are selected. Secondly, the source images are fed directly into the feature extraction modules to extract salient features. Then, channel normalization and an averaging operator are applied to obtain the activity level maps. A fusion rule combining Softmax and up-sampling is used to obtain the fusion weights, which are then convolved with the source images to produce seven candidate fused images. Finally, the pixel-wise maximum of the candidate fused images is taken as the final reconstructed fusion result. Experiments are conducted on public datasets, and the proposed method is compared with classical traditional and deep learning methods. Subjective and objective results show that the proposed method effectively integrates the salient information of infrared and visible images and enhances the detail texture of the fused images, providing better visual effects with fewer image artefacts and less artificial noise.
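The pipeline described above can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: the `extract` callable stands in for one hypothetical EfficientNet feature module, the up-sampling is nearest-neighbour for simplicity, and the weights are assumed to be applied element-wise to the source images.

```python
import numpy as np

def softmax(a, axis=0):
    # Numerically stable softmax across the source-image axis.
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def upsample(x, factor):
    # Nearest-neighbour up-sampling as a stand-in for the paper's up-sampling step.
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def fuse(ir, vis, extract, factor):
    # extract: hypothetical feature module returning a (C, H/factor, W/factor) map.
    feats = [extract(img) for img in (ir, vis)]          # deep convolutional features
    acts = [np.mean(np.abs(f), axis=0) for f in feats]   # channel normalization + average -> activity level map
    w = softmax(np.stack(acts), axis=0)                  # per-pixel fusion weights (sum to 1)
    w = np.stack([upsample(wi, factor) for wi in w])     # back to source resolution
    return w[0] * ir + w[1] * vis                        # one candidate fused image
```

With one candidate per selected module, the final step would be a pixel-wise maximum over the candidates, e.g. `np.maximum.reduce(candidates)`.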