Single-sample Image Translation Based on Multi-scale Scale-Unet
Single-sample unsupervised image-to-image translation (UI2I) has made significant progress with the development of generative adversarial networks (GANs). However, previous methods cannot capture complex textures in images while preserving the original content information. We propose SUGAN, a novel one-shot image translation framework based on a scale-variable U-Net structure (Scale-Unet). SUGAN uses Scale-Unet as its generator, continuously refining the network through multi-scale structures and progressive training so that image features are learned from coarse to fine. In addition, we propose a scale-pixel loss that better constrains the preservation of the original content information and prevents information loss. Experiments on the public Summer↔Winter and Horse↔Zebra datasets show that, compared with SinGAN, TuiGAN, TSIT, StyTR2, and other methods, the SIFID of the generated images is reduced by 30%. The proposed method better preserves the content information of the image while generating detailed, realistic, high-quality images.
single-sample image translation; Scale-Unet; multi-scale structure; progressive approach; scale-pixel loss
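The abstract does not define the scale-pixel loss precisely. As an illustration only, here is a minimal PyTorch sketch under the assumption that it is a pixel-wise L1 reconstruction term applied at every scale of the generator pyramid; the function name, the per-scale weights, and the downsampling scheme are hypothetical and may differ from the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def scale_pixel_loss(source, generated_pyramid, weights=None):
    """Hypothetical scale-pixel loss: pixel-wise L1 between the source
    image (resized to each scale) and the generator output at that scale.
    `generated_pyramid` is a list of tensors, coarsest to finest.
    The exact formulation in the paper may differ."""
    if weights is None:
        weights = [1.0] * len(generated_pyramid)
    loss = source.new_zeros(())
    for w, gen in zip(weights, generated_pyramid):
        # Resize the source image to match the current scale's resolution.
        src = F.interpolate(source, size=gen.shape[-2:],
                            mode='bilinear', align_corners=False)
        loss = loss + w * F.l1_loss(gen, src)
    return loss
```

Under this reading, the term penalizes content drift at every resolution of the coarse-to-fine pyramid rather than only at the final output, which is consistent with the abstract's claim that the loss constrains the preservation of the original content information across scales.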