
Determinant Point Process Sampling Method for Text-to-Image Generation

In recent years, text-to-image generation based on generative adversarial networks (GAN) has achieved major breakthroughs: such models can generate images that match the semantic information of a text description. However, the generated images usually lack concrete texture details and often suffer from mode collapse and insufficient diversity. To address these problems, this paper proposes a determinant point process method for generative adversarial networks (GAN-DPP) to improve the quality of the samples generated by the model, and implements GAN-DPP on two baseline models, StackGAN++ and ControlGAN. During training, the method uses a determinantal point process kernel matrix to model the diversity of the real and synthetic data, and introduces an unsupervised penalty loss that encourages the generator to produce diverse data similar to the real data. This improves the sharpness and diversity of the generated samples and mitigates mode collapse, without adding any extra training stage. On the CUB and Oxford-102 datasets, quantitative evaluation with three metrics, Inception Score, Frechet Inception Distance, and Human Rank, demonstrates that GAN-DPP effectively improves the diversity and quality of the generated images. Qualitative visual comparisons further show that models using GAN-DPP generate images with richer texture details and markedly higher diversity.
Objectives: In recent years, great breakthroughs have been made in text-to-image generation based on generative adversarial networks (GAN). Such models can generate images corresponding to the semantic information of a text and have great application value. However, the generated images usually lack specific texture details and often suffer from problems such as mode collapse and lack of diversity. Methods: This paper proposes a determinant point process method for generative adversarial networks (GAN-DPP) to improve the quality of generated samples, and implements GAN-DPP on two baseline models, StackGAN++ and ControlGAN. During training, the method uses a determinantal point process kernel to model the diversity of real and synthetic data, and encourages the generator, through a penalty loss, to generate diverse data similar to the real data. It improves the clarity and diversity of generated samples and reduces problems such as mode collapse, with no extra computation added during training. Results: This paper compares the generated results using quantitative metrics. For the Inception Score, a higher value indicates improved image clarity and diversity. On the Oxford-102 dataset, the score of GAN-DPP-S is 3.1% higher than that of StackGAN++, and the score of GAN-DPP-C is 3.4% higher than that of ControlGAN. On the CUB dataset, the score of GAN-DPP-S increases by 8.2% and that of GAN-DPP-C by 1.9%. For the Frechet Inception Distance, a lower value indicates better image generation quality. On the Oxford-102 dataset, the score of GAN-DPP-S is reduced by 11.1% and that of GAN-DPP-C by 11.2%. On the CUB dataset, the score of GAN-DPP-S is reduced by 6.4% and that of GAN-DPP-C by 3.1%. Conclusions: Qualitative and quantitative comparative experiments show that the proposed GAN-DPP method improves the performance of generative adversarial network models. The images generated by the models have richer texture details, and their diversity is significantly improved.
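The DPP-based diversity penalty described in the abstract can be sketched as follows. This is a minimal illustration of the general idea of matching the spectra of DPP kernels built from real and generated feature batches, not the paper's exact formulation; the kernel choice (inner products of L2-normalized features) and the squared-difference penalty on normalized eigenvalues are assumptions.

```python
import numpy as np

def dpp_kernel(features):
    """Build a DPP L-kernel as the Gram matrix of L2-normalized features.

    A common kernel choice: similar items give rows close to parallel,
    which shrinks det(L) and signals low diversity in the batch.
    """
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    x = features / norms
    return x @ x.T

def dpp_diversity_loss(real_feats, fake_feats):
    """Hypothetical generator penalty: make the eigenvalue spectrum of the
    fake-batch DPP kernel match that of the real-batch kernel, so the
    generator is pushed toward the diversity structure of the real data.
    """
    l_real = dpp_kernel(real_feats)
    l_fake = dpp_kernel(fake_feats)
    # eigh returns eigenvalues in ascending order for symmetric matrices
    ev_real = np.linalg.eigvalsh(l_real)
    ev_fake = np.linalg.eigvalsh(l_fake)
    # normalize each spectrum so the penalty is scale-invariant
    ev_real = ev_real / ev_real.sum()
    ev_fake = ev_fake / ev_fake.sum()
    return float(np.sum((ev_real - ev_fake) ** 2))
```

In a training loop, this penalty would be added to the usual adversarial generator loss; because it is computed from batch statistics only, it requires no labels and no extra training stage, consistent with the unsupervised penalty described above.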

generative adversarial networks; text-to-image synthesis; determinantal point process; mode collapse; diversity

Li Xiaolin, Li Gang, Zhang Enqi, Gu Guanghua


School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China

Hebei Key Laboratory of Information Transmission and Signal Processing, Qinhuangdao 066004, Hebei, China

generative adversarial networks; text-to-image generation; determinantal point process; mode collapse; diversity

Funding: National Natural Science Foundation of China (62072394); Natural Science Foundation of Hebei Province (F2021203019)

2024

Geomatics and Information Science of Wuhan University
Wuhan University


Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 1.072
ISSN:1671-8860
Year, Volume (Issue): 2024, 49(2)