
Text-to-image synthesis method based on spatial attention and conditional augmentation

To address the problems of semantic inconsistency between text and image, unstable training, and limited diversity in text-to-image synthesis, a text-to-image model based on spatial attention and conditional augmentation was proposed on top of a simple and effective text-to-image baseline (DF-GAN). To improve the stability of training and increase the diversity of the generated images, a conditional augmentation module was added to the original model. To fit the image distribution starting from the text distribution, enrich the visual features, and enlarge the representation space, an additional Affine block was inserted into the original DF-Block module. A spatial attention module was added to the discriminator to improve the semantic consistency between the text and the synthesized image. Experimental results showed that the inception score (IS) increased by 2.05% on the CUB dataset and by 2.63% on Oxford-102, and the Fréchet inception distance (FID) decreased by 20.73% on CUB and by 9.25% on COCO. The images generated by the proposed model were more diverse and closer to real images.
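
The conditional augmentation step can be illustrated with a short PyTorch sketch. This is not the paper's code: the embedding and conditioning dimensions (256 and 128) and the single linear projection are assumptions, and the sketch follows the common StackGAN-style formulation, in which a mean and log-variance are predicted from the sentence embedding, a conditioning vector is sampled by reparameterization, and a KL term regularizes the latent distribution; this is the mechanism that smooths the text manifold, stabilizes training, and adds diversity.

import torch
import torch.nn as nn


class ConditionalAugmentation(nn.Module):
    """Maps a sentence embedding to a sampled conditioning vector via reparameterization."""

    def __init__(self, embed_dim=256, cond_dim=128):
        super().__init__()
        self.fc = nn.Linear(embed_dim, cond_dim * 2)  # predicts mu and log-variance jointly
        self.cond_dim = cond_dim

    def forward(self, sent_emb):
        stats = self.fc(sent_emb)
        mu, logvar = stats[:, :self.cond_dim], stats[:, self.cond_dim:]
        std = torch.exp(0.5 * logvar)
        c_hat = mu + torch.randn_like(std) * std  # sampled conditioning vector
        return c_hat, mu, logvar


def kl_loss(mu, logvar):
    # KL divergence between N(mu, sigma^2) and N(0, I), averaged over the batch.
    return (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)).mean()


if __name__ == "__main__":
    ca = ConditionalAugmentation()
    c_hat, mu, logvar = ca(torch.randn(4, 256))  # a batch of 4 sentence embeddings
    print(c_hat.shape, kl_loss(mu, logvar).item())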
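
The added Affine layer in the DF-Block can be sketched in the same style. In DF-GAN, an Affine layer predicts a channel-wise scale and shift from the conditioning vector and applies them to the generator feature map; the sketch below stacks one extra Affine layer into a fusion block (three instead of two). The exact placement of the added layer and the layer widths are assumptions, since the abstract only states that an additional Affine block enlarges the space of visual features the generator can express.

import torch
import torch.nn as nn


class Affine(nn.Module):
    """Predicts a channel-wise scale (gamma) and shift (beta) from the text condition."""

    def __init__(self, cond_dim, channels):
        super().__init__()
        self.gamma = nn.Sequential(nn.Linear(cond_dim, channels), nn.ReLU(True),
                                   nn.Linear(channels, channels))
        self.beta = nn.Sequential(nn.Linear(cond_dim, channels), nn.ReLU(True),
                                  nn.Linear(channels, channels))

    def forward(self, x, cond):
        g = self.gamma(cond).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        b = self.beta(cond).unsqueeze(-1).unsqueeze(-1)
        return g * x + b


class FusionBlock(nn.Module):
    """Affine -> LeakyReLU repeated (three times here instead of two), then a 3x3 conv."""

    def __init__(self, cond_dim, channels, num_affine=3):
        super().__init__()
        self.affines = nn.ModuleList([Affine(cond_dim, channels) for _ in range(num_affine)])
        self.act = nn.LeakyReLU(0.2, inplace=True)
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x, cond):
        h = x
        for affine in self.affines:
            h = self.act(affine(h, cond))
        return self.conv(h)


if __name__ == "__main__":
    block = FusionBlock(cond_dim=128, channels=64)
    feats = torch.randn(2, 64, 16, 16)  # intermediate generator feature map
    cond = torch.randn(2, 128)          # conditioning vector from conditional augmentation
    print(block(feats, cond).shape)     # torch.Size([2, 64, 16, 16])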
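
The spatial attention added to the discriminator can be illustrated as well. The abstract does not specify the attention design, so the sketch below uses the widely known CBAM-style spatial attention, in which channel-pooled maps pass through a convolution and a sigmoid to form a per-location mask; it shows how such a module reweights discriminator features toward salient regions, not the paper's exact implementation.

import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Reweights each spatial location of a feature map with a learned attention mask."""

    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)    # (B, 1, H, W) average over channels
        max_map, _ = x.max(dim=1, keepdim=True)  # (B, 1, H, W) max over channels
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                          # mask applied to every channel


if __name__ == "__main__":
    sa = SpatialAttention()
    feats = torch.randn(2, 256, 16, 16)  # discriminator feature map
    print(sa(feats).shape)               # torch.Size([2, 256, 16, 16])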

Keywords: text-to-image; DF-GAN; conditional augmentation model; Affine block; spatial attention model

马军、车进、贺愉婷、马鹏森


School of Electronic and Electrical Engineering, Ningxia University, Yinchuan 750021, Ningxia, China

Ningxia Key Laboratory of Intelligent Sensing for Desert Information, Yinchuan 750021, Ningxia, China


2024

Journal of Shandong University (Engineering Science)
Shandong University

Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.634
ISSN: 1672-3961
Year, volume (issue): 2024, 54(6)