To address the problem that the text encoder in text-to-image generation cannot mine text information deeply, which leads to semantic inconsistency in the generated images, a text-to-image generation method based on an improved DMGAN model is proposed. First, the XLNet pre-trained model is used to encode the text; pre-trained on a large-scale corpus, it captures extensive prior knowledge of the text and enables deep mining of contextual information. Then, a channel attention module is added to both the initial image generation stage and the image refinement stage of the DMGAN model to highlight important feature channels, further improving the semantic consistency and spatial layout rationality of the generated images, as well as the convergence speed and stability of the model. Experimental results show that, compared with the original DMGAN model, the images generated by the proposed model on the CUB dataset achieve a 0.47 increase in the IS metric and a 2.78 decrease in the FID metric, which demonstrates that the model has better cross-modal generation ability.
Keywords: text-to-image; XLNet model; generative adversarial networks; channel attention
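To illustrate the kind of channel attention described above, the following is a minimal squeeze-and-excitation style sketch in PyTorch. The reduction ratio, layer layout, and tensor shapes are assumptions for illustration, not details taken from the paper's architecture.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative sketch).

    Intended to show how a channel attention module can re-weight feature
    channels in the initial generation and refinement stages of a DMGAN-like
    generator; the exact module used in the paper may differ.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial average per channel
        self.fc = nn.Sequential(             # excitation: learn per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                          # highlight important feature channels


if __name__ == "__main__":
    # Re-weight a hidden feature map such as those produced between
    # generation stages (the shape here is purely illustrative).
    feat = torch.randn(4, 64, 32, 32)
    out = ChannelAttention(64)(feat)
    print(out.shape)  # torch.Size([4, 64, 32, 32])
```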