Cross-modal PET/CT medical image fusion of lung tumors based on DCIF-GAN
Medical image fusion based on generative adversarial networks (GANs) is one of the research hotspots in the field of computer-aided diagnosis. However, GAN-based image fusion methods suffer from unstable training, an insufficient ability to extract local and global contextual semantic information from the images, and insufficient interactive fusion. To solve these problems, this paper proposes a dual-coupled interactive fusion GAN (DCIF-GAN). Firstly, a GAN with dual generators and dual discriminators was designed: coupling between the generators and between the discriminators was realized through a weight-sharing mechanism, and interactive fusion was realized through a global self-attention mechanism. Secondly, coupled CNN-Transformer feature extraction and feature reconstruction modules were designed, which improve the ability to extract local and global feature information within an image of the same modality. Thirdly, a cross-modal interactive fusion module (CMIFM) was designed, which interactively fuses image feature information across modalities. To verify the effectiveness of the proposed model, experiments were carried out on a lung tumor PET/CT medical image dataset. Compared with the best of the other four methods, the proposed method improves average gradient, spatial frequency, structural similarity, standard deviation, peak signal-to-noise ratio, and information entropy by 1.38%, 0.39%, 29.05%, 30.23%, 0.18%, and 4.63%, respectively. The model can highlight the information of the lesion areas, and the fused images have clear structure and rich texture details.
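The weight-sharing coupling between the two generator branches described above can be illustrated with a minimal NumPy sketch. All layer sizes, names, and the fully connected layers here are hypothetical simplifications; the paper's actual generators are CNN-Transformer based.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared layer: both generator branches use the SAME weight
# matrix, which is what couples them during training.
W_shared = rng.standard_normal((64, 64)) * 0.01

# Modality-specific (private) input projections for PET and CT features.
W_pet = rng.standard_normal((64, 64)) * 0.01
W_ct = rng.standard_normal((64, 64)) * 0.01

def generator_branch(x, W_private):
    """One generator branch: a private layer followed by the shared layer."""
    h = np.maximum(x @ W_private, 0.0)    # modality-specific layer (ReLU)
    return np.maximum(h @ W_shared, 0.0)  # coupled layer shared by both branches

pet_feat = rng.standard_normal((1, 64))
ct_feat = rng.standard_normal((1, 64))

z_pet = generator_branch(pet_feat, W_pet)
z_ct = generator_branch(ct_feat, W_ct)

# Both branches reference the same W_shared object, so an update to the
# shared layer would affect both generators identically.
print(z_pet.shape, z_ct.shape)
```

The same pattern would apply to the coupled discriminators: private early layers per modality, with later layers shared.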
Keywords: medical image; image fusion; PET/CT; coupled generative adversarial network; Swin Transformer
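The cross-modal interactive fusion via attention mentioned in the abstract can be sketched as scaled dot-product attention in which queries from one modality attend over keys and values of the other. This is a generic illustration under assumed token and feature dimensions, not the paper's exact CMIFM.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(q_feat, kv_feat, d=32, seed=0):
    """Queries from one modality attend over keys/values of the other."""
    rng = np.random.default_rng(seed)
    Wq = rng.standard_normal((q_feat.shape[-1], d)) * 0.01
    Wk = rng.standard_normal((kv_feat.shape[-1], d)) * 0.01
    Wv = rng.standard_normal((kv_feat.shape[-1], d)) * 0.01
    Q, K, V = q_feat @ Wq, kv_feat @ Wk, kv_feat @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d))  # (n_q, n_kv) attention map
    return attn @ V

rng = np.random.default_rng(1)
pet_tokens = rng.standard_normal((16, 64))  # hypothetical PET feature tokens
ct_tokens = rng.standard_normal((16, 64))   # hypothetical CT feature tokens

# Interactive fusion: PET attends to CT and CT attends to PET, then the
# two directions are combined (summation chosen here for simplicity).
fused = cross_modal_attention(pet_tokens, ct_tokens) + \
        cross_modal_attention(ct_tokens, pet_tokens)
print(fused.shape)
```

Summing the two attention directions is one simple combination rule; concatenation followed by a projection would be an equally plausible choice.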