A dilated convolution parallel attention mechanism and texture contrast enhancement for infrared and visible image fusion
In the realm of machine vision, the fusion of infrared and visible images for video surveillance enhances the ability of machines to recognize targets and environments. To address the problems of insufficient detail extraction and blurred target contours in existing infrared and visible image fusion algorithms for video surveillance, a fusion method based on a dilated convolution parallel attention mechanism and texture contrast enhancement is proposed. First, the fusion network extracts gradient and intensity information from the source images using multi-scale dense connections and the dilated convolution parallel attention mechanism. Then, a texture contrast enhancement network is constructed from Scharr filters and depthwise separable convolutions to strengthen the contrast and texture details of the fused features. Finally, a decomposition network with an information exchange flow is designed; because the quality of the decomposed images depends directly on the fusion result, the decomposition process drives the fused image to retain more scene information. Compared with eight other representative image fusion methods, the proposed method improves seven objective evaluation metrics by 5% to 62%, indicating that it not only fully extracts source image information and produces fusion results with clearer texture details and better contrast, but also alleviates the difficulty that large resolution differences between source images pose for practical applications such as multispectral remote sensing analysis and military reconnaissance.
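To make the attention design described above concrete, the following is a minimal, illustrative sketch (not the authors' released code) of a parallel attention block built from dilated convolutions, assuming PyTorch; the module name, dilation rates, and reduction ratio are hypothetical choices for demonstration only.

```python
import torch
import torch.nn as nn


class DilatedParallelAttention(nn.Module):
    """Parallel spatial- and channel-attention branches; the spatial branch
    uses dilated convolutions to enlarge the receptive field."""

    def __init__(self, channels: int, dilations=(1, 2, 3)):
        super().__init__()
        # Spatial branch: parallel dilated 3x3 convolutions fused into one attention map.
        self.spatial_convs = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.spatial_fuse = nn.Conv2d(len(dilations) * channels, 1, kernel_size=1)
        # Channel branch: squeeze-and-excitation style gating.
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=1),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spatial = torch.cat([conv(x) for conv in self.spatial_convs], dim=1)
        spatial_att = self.sigmoid(self.spatial_fuse(spatial))  # (B, 1, H, W)
        channel_att = self.sigmoid(self.channel_fc(x))          # (B, C, 1, 1)
        # Apply both attention maps in parallel and merge additively.
        return x * spatial_att + x * channel_att


if __name__ == "__main__":
    feats = torch.randn(1, 32, 64, 64)  # e.g., fused infrared/visible feature map
    print(DilatedParallelAttention(32)(feats).shape)  # torch.Size([1, 32, 64, 64])
```

The parallel arrangement lets the spatial branch emphasize structure over a wide receptive field while the channel branch reweights feature maps; the additive merge is one plausible fusion choice, not necessarily the one used in the paper.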