TEXT-TO-IMAGE BASED ON GENERATIVE ADVERSARIAL NETWORK
In recent years, generative adversarial networks (GANs) have achieved remarkable results in text-to-image synthesis, but when generating complex images they often lose important fine-grained information, leading to problems such as blurred image edges and unclear local textures. To address these problems, a deep attention stacked GAN (DAS-GAN) is proposed on the basis of StackGAN. The first stage of the model generates the basic outline and colors of the image, the second stage adds and corrects parts of the appearance and color, and the last stage refines the texture details of the image. In Inception Score experiments on the CUB dataset, DAS-GAN scores 0.296 and 0.078 higher than StackGAN++ and AttnGAN respectively, which verifies the effectiveness of the model.
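The three-stage coarse-to-fine pipeline described above can be sketched as follows. This is a minimal illustrative skeleton, not the paper's implementation: the stage functions, sizes, and the nearest-neighbour "refinement" are placeholders standing in for the trained generator stages, and the text-embedding and noise vectors are assumed inputs.

```python
import numpy as np

def stage1_coarse(text_emb, noise, size=64):
    # Stage 1 (hypothetical): produce a coarse 64x64 image carrying only
    # the basic outline and color from the text conditioning and noise.
    seed = float(np.dot(text_emb, noise)) / len(text_emb)
    return np.full((size, size, 3), seed)

def refine(image, scale=2):
    # Stages 2 and 3 (placeholder): nearest-neighbour upsampling stands in
    # for a generator stage that corrects appearance and adds texture
    # detail at double the resolution.
    return image.repeat(scale, axis=0).repeat(scale, axis=1)

rng = np.random.default_rng(0)
text_emb = rng.random(128)   # assumed output of a text encoder
noise = rng.random(128)      # latent noise vector

img64 = stage1_coarse(text_emb, noise)   # stage 1: basic outline and color
img128 = refine(img64)                   # stage 2: corrected appearance
img256 = refine(img128)                  # stage 3: refined texture details

print(img64.shape, img128.shape, img256.shape)
```

Each stage doubles the spatial resolution (64 → 128 → 256), mirroring the coarse-to-fine progression the abstract attributes to DAS-GAN.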