Cross-modal medical image synthesis networks based on deep convolution have the advantage of learning nonlinear mapping relationships from large-scale data to perform local generation. However, existing methods overlook the inherent feature self-similarity of medical images and extract only pixel-level feature information through convolution, resulting in insufficient deep feature extraction and inadequate representation of semantic information. Therefore, a Generative Adversarial Network based on a Graph Attention Block (GAB) and a Global Patch Attention Block (GPAB), named GGPA-GAN (Graph Attention Block and Global Patch Attention Block Generative Adversarial Network), is proposed. The GAB and GPAB capture the self-similarity between and within slices of medical images, enabling deep feature extraction. Additionally, 2D positional encoding is incorporated into the generator, exploiting the spatial position information of the images to enhance the expression of semantic information. Experimental results on the HCP_S1200 and ADNI datasets demonstrate that the proposed network outperforms other networks in synthesizing brain MRI images across the 3T-7T and T1-T2 modalities. In the 3T-7T brain MRI synthesis task, the method surpasses the Pix2pix method with an improvement of 0.55 in Peak Signal-to-Noise Ratio (PSNR), an improvement of 0.007 in Structural Similarity Index (SSIM), and a reduction of 6.55 in Mean Absolute Error (MAE). In the T1-T2 brain MRI synthesis task, it surpasses Pix2pix with an improvement of 0.68 in PSNR, an improvement of 0.006 in SSIM, and a reduction of 8.77 in MAE. These results demonstrate the effectiveness of the proposed method and offer strong support for clinical diagnosis.
brain magnetic resonance imaging; deep learning; medical image synthesis; graph attention; positional encoding
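The abstract does not specify the exact form of the 2D positional encoding used in the generator. A common choice, shown below as a minimal NumPy sketch, is the 2D extension of sinusoidal positional encoding, in which half of the feature channels encode the row coordinate and the other half encode the column coordinate; the function names and the channel split are illustrative assumptions, not the paper's stated implementation.

```python
import numpy as np

def positional_encoding_1d(length, dim):
    """Sinusoidal encoding along one axis: sin on even channels, cos on odd."""
    pos = np.arange(length)[:, None]                  # (length, 1)
    i = np.arange(dim // 2)[None, :]                  # (1, dim/2)
    angle = pos / np.power(10000.0, 2 * i / dim)      # (length, dim/2)
    pe = np.zeros((length, dim))
    pe[:, 0::2] = np.sin(angle)
    pe[:, 1::2] = np.cos(angle)
    return pe

def positional_encoding_2d(height, width, channels):
    """2D encoding: first half of the channels encodes rows, second half columns
    (an illustrative split; the paper does not state its exact scheme)."""
    assert channels % 4 == 0, "channels must be divisible by 4"
    half = channels // 2
    pe = np.zeros((height, width, channels))
    # Broadcast the row encoding across all columns, and vice versa.
    pe[:, :, :half] = positional_encoding_1d(height, half)[:, None, :]
    pe[:, :, half:] = positional_encoding_1d(width, half)[None, :, :]
    return pe

# Example: encoding for an 8x8 feature map with 16 channels,
# which would be added to the generator's feature maps elementwise.
pe = positional_encoding_2d(8, 8, 16)
```

Because the encoding depends only on spatial coordinates, it injects absolute position information into otherwise translation-invariant convolutional features.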