PET/CT Cross-Modal Medical Image Fusion of Lung Tumors Based on DCIF-GAN

Medical image fusion based on Generative Adversarial Networks (GAN) is one of the research hotspots in computer-aided diagnosis. However, existing GAN-based fusion methods suffer from unstable training, an insufficient ability to extract local and global contextual semantic information from images, and an insufficient degree of interactive fusion. To address these problems, this paper proposes a Dual-Coupled Interactive Fusion GAN (DCIF-GAN). First, a dual-generator, dual-discriminator GAN is designed: coupling between the generators and between the discriminators is realized through a weight-sharing mechanism, and interactive fusion is realized through a global self-attention mechanism. Second, a coupled CNN-Transformer feature extraction module (CC-TFEM) and a CNN-Transformer feature reconstruction module (C-TFRM) are designed, which improve the extraction of local and global feature information within a single modality. Third, a cross-modal interactive fusion module (CMIFM) is designed, which further integrates global interaction information between modalities through a cross-modal self-attention mechanism. To verify the effectiveness of the proposed model, experiments were carried out on a lung tumor PET/CT medical image dataset. Compared with the best of the four competing methods, the proposed method improves average gradient, spatial frequency, structural similarity, standard deviation, peak signal-to-noise ratio, and information entropy by 1.38%, 0.39%, 29.05%, 30.23%, 0.18%, and 4.63%, respectively. The model highlights lesion-region information, and the fused images have clear structure and rich texture details.

Keywords: medical image; image fusion; PET/CT; coupled generative adversarial network; Swin Transformer
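The objective metrics named in the abstract (average gradient, spatial frequency, standard deviation, information entropy) have widely used textbook definitions for a single image. The sketch below implements those common formulations in NumPy; note that the paper may use slightly different normalizations or gradient operators.

```python
import numpy as np

def average_gradient(img):
    # AG: mean magnitude of local gradients (common definition)
    dx = np.diff(img, axis=1)[:-1, :]   # horizontal differences
    dy = np.diff(img, axis=0)[:, :-1]   # vertical differences
    return np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0))

def spatial_frequency(img):
    # SF = sqrt(row frequency^2 + column frequency^2)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def information_entropy(img, bins=256):
    # Shannon entropy of the grey-level histogram, in bits
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# toy "fused image": a smooth ramp with intensities in [0, 1]
img = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
print(average_gradient(img), spatial_frequency(img),
      img.std(), information_entropy(img))
```

Higher values of all four metrics are generally read as richer detail and contrast in the fused image; standard deviation here is simply `img.std()`.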

ZHOU Tao, CHENG Qianru, ZHANG Xiangxiang, LI Qi, LU Huiling


School of Computer Science and Engineering, North Minzu University, Yinchuan 750021, Ningxia, China

Key Laboratory of Image and Graphics Intelligent Processing of State Ethnic Affairs Commission, North Minzu University, Yinchuan 750021, Ningxia, China

School of Medical Information Engineering, Ningxia Medical University, Yinchuan 750004, Ningxia, China


Funding: Natural Science Foundation of Ningxia (2022AAC03149); 2022 Postgraduate Innovation Project of North Minzu University (YCX22190)

Optics and Precision Engineering (光学精密工程)
Publisher: Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences; China Instrument and Control Society
Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 2.059
ISSN: 1004-924X
Year, Volume (Issue): 2024, 32(2)