Image synthesis method based on multiple text descriptions
Aiming at the challenges of low quality and structural errors in images generated from a single text description, a multi-stage generative adversarial network model was studied, and it was proposed to interpolate different text sequences to enrich the given text descriptions by extracting features from multiple text descriptions and imparting greater detail to the generated images. To strengthen the correlation between the generated images and the corresponding text, a multi-caption deep attentional multi-modal similarity model that captured attention features was introduced. These features were integrated with the visual features from the preceding layer and served as input to the subsequent layer, which improved the realism of the generated images and enhanced their semantic consistency with the text descriptions. In addition, a self-attention mechanism was incorporated to enable the model to coordinate the details at each position effectively, resulting in images more aligned with real-world scenes. The optimized model was verified on the CUB and MS-COCO datasets, demonstrating the generation of images with intact structures, stronger semantic consistency, and richer visual diversity.
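The abstract describes two reusable ideas: enriching the conditioning signal by combining embeddings from several captions of the same image, and refining generator feature maps with self-attention so distant spatial positions stay consistent. The sketch below is a minimal, hedged illustration of both; the function and class names (`interpolate_caption_embeddings`, `SelfAttention`) and the mean-pooling choice are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def interpolate_caption_embeddings(caption_embs: torch.Tensor) -> torch.Tensor:
    """Combine sentence embeddings from several captions of one image
    (here by simple averaging) to enrich the conditioning vector.
    caption_embs: (num_captions, batch, dim) -> (batch, dim)."""
    return caption_embs.mean(dim=0)


class SelfAttention(nn.Module):
    """Non-local self-attention over spatial feature maps (SAGAN-style),
    letting the generator coordinate details across all positions."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, hw, c//8)
        k = self.key(x).flatten(2)                    # (b, c//8, hw)
        attn = F.softmax(torch.bmm(q, k), dim=-1)     # (b, hw, hw) attention map
        v = self.value(x).flatten(2)                  # (b, c, hw)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # residual connection


if __name__ == "__main__":
    # Toy shapes: 4 captions per image, batch of 2, 256-d sentence embeddings.
    caps = torch.randn(4, 2, 256)
    cond = interpolate_caption_embeddings(caps)   # (2, 256) conditioning vector
    feats = torch.randn(2, 64, 32, 32)            # visual features from a prior stage
    fused = SelfAttention(64)(feats)              # refined feature map, same shape
    print(cond.shape, fused.shape)
```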