
Text-to-image method based on XLnet and DMGAN

To address the problem that the text encoder in text-to-image generation cannot mine text information in depth, which leads to semantic inconsistency in the generated images, this paper proposes a text-to-image method based on an improved DMGAN model. First, a pre-trained XLnet model is used to encode the text; pre-trained on a large-scale corpus, it captures a large amount of prior knowledge about the text and deeply mines contextual information. Then, a channel attention module is added to both the initial image-generation stage and the image-refinement stage of DMGAN to highlight important feature channels, further improving the semantic consistency and spatial-layout rationality of the generated images as well as the convergence speed and stability of the model. Experimental results show that, compared with the original DMGAN model, the images generated by the proposed model on the CUB dataset improve the IS metric by 0.47 and reduce the FID metric by 2.78, which fully demonstrates that the model has better cross-modal generation capability.
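
A minimal sketch of how the caption-encoding step could look, assuming the Hugging Face transformers library and the xlnet-base-cased checkpoint (the abstract does not name the actual implementation, checkpoint, or pooling scheme); token-level outputs would play the role of word features for DMGAN's attention, and a mask-aware mean pooling is used here as an assumed sentence embedding:

```python
# Hypothetical caption encoding with a pre-trained XLNet (assumptions: Hugging Face
# transformers + sentencepiece, xlnet-base-cased, mean pooling for the sentence vector).
import torch
from transformers import XLNetTokenizer, XLNetModel

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
encoder = XLNetModel.from_pretrained("xlnet-base-cased")
encoder.eval()

captions = ["a small bird with a red head and a short pointed beak"]
batch = tokenizer(captions, return_tensors="pt", padding=True)

with torch.no_grad():
    out = encoder(**batch)

word_features = out.last_hidden_state                  # (B, T, 768) token-level features
mask = batch["attention_mask"].unsqueeze(-1).float()   # ignore padding when pooling
sentence_feature = (word_features * mask).sum(1) / mask.sum(1)   # (B, 768) sentence vector
```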
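
The abstract does not describe the internal structure of the channel attention module added to the initial and refinement stages; the sketch below assumes a standard squeeze-and-excitation style design (global average pooling, a bottleneck MLP, and sigmoid gating over channels), written in PyTorch:

```python
# Assumed SE-style channel attention; the paper's exact module may differ.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)       # squeeze: global spatial average
        self.fc = nn.Sequential(                  # excitation: bottleneck MLP + gate
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                        # reweight important feature channels

# Usage: applied to feature maps in the initial / refinement generator stages.
feat = torch.randn(4, 64, 64, 64)                 # (B, C, H, W)
feat = ChannelAttention(64)(feat)
```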

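For context on the reported numbers, the sketch below shows how IS and FID are commonly computed with the torchmetrics package (an assumed tool requiring torch-fidelity; the paper's actual evaluation code is not described here). Random tensors stand in for real and generated CUB images; in practice the metrics are computed over thousands of samples.

```python
# Illustrative IS/FID computation (assumed tooling: torchmetrics + torch-fidelity).
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.inception import InceptionScore

real = torch.randint(0, 256, (64, 3, 256, 256), dtype=torch.uint8)   # stand-in real images
fake = torch.randint(0, 256, (64, 3, 256, 256), dtype=torch.uint8)   # stand-in generated images

fid = FrechetInceptionDistance(feature=2048)
fid.update(real, real=True)
fid.update(fake, real=False)
print("FID:", fid.compute().item())                 # lower is better

inception = InceptionScore()
inception.update(fake)
is_mean, is_std = inception.compute()
print("IS:", is_mean.item(), "+/-", is_std.item())  # higher is better
```
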
text-to-image; XLnet model; generative adversarial networks; channel attention

赵泽纬、车进、吕文涵


School of Physics and Electronic-Electrical Engineering, Ningxia University, Yinchuan 750021, China


National Natural Science Foundation of China

61861037

2024

液晶与显示 (Chinese Journal of Liquid Crystals and Displays)
Sponsored by: Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences; Liquid Crystal Branch, China Optics and Optoelectronics Manufactures Association; Liquid Crystal Branch, Chinese Physical Society


Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.964
ISSN: 1007-2780
Year, Volume (Issue): 2024, 39(2)