Virtual Try-on Network by Reconstructing Human Semantic Segmentation
In recent years, virtual try-on technology has begun to play an important role in online shopping, and its commercial value has attracted wide attention. Image-based virtual try-on aims to accurately deform the target clothing and synthesize it onto the target person image. The key to accurate clothing deformation is determining where and to what degree the clothing image should be deformed. To solve this problem, this paper proposes a virtual try-on network based on the reconstruction of human semantic segmentation. Before the try-on, the target clothing is first pre-aligned to the corresponding position on the target body by an affine transformation. Then, the semantic segmentation of the target body is reconstructed to predict the semantic layout after a successful try-on. Finally, deformation parameters are computed from the new semantic segmentation, and the deformed clothing image is synthesized onto the target body in a way that adaptively fills in the missing pose information. Experiments show that this method achieves more accurate and realistic virtual try-on results.
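The pre-alignment step described above can be illustrated with a minimal sketch: given a few corresponding landmarks (e.g., shoulders and hem) on the flat clothing image and on the target person, a 2×3 affine matrix is fitted by least squares and used to map clothing coordinates onto the body. The landmark coordinates and the helper names (`estimate_affine`, `apply_affine`) are hypothetical; in the paper such correspondences would come from pose and segmentation estimation.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src points to dst points."""
    n = len(src)
    A = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    for i, ((x, y), (u, v)) in enumerate(zip(src, dst)):
        A[2 * i]     = [x, y, 1, 0, 0, 0]  # row for the u-coordinate
        A[2 * i + 1] = [0, 0, 0, x, y, 1]  # row for the v-coordinate
        b[2 * i], b[2 * i + 1] = u, v
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)

def apply_affine(M, pt):
    """Map a single (x, y) point through the affine matrix M."""
    x, y = pt
    return M @ np.array([x, y, 1.0])

# Hypothetical landmarks: points on the flat clothing image and the
# corresponding locations on the target person image.
cloth_pts = [(40, 30), (160, 30), (100, 180)]
body_pts  = [(60, 80), (150, 85), (110, 220)]
M = estimate_affine(cloth_pts, body_pts)
```

With three point pairs the six affine parameters are determined exactly, so each clothing landmark maps onto its body counterpart; with more correspondences the fit becomes a least-squares approximation. The resulting matrix can then be applied to the whole clothing image (e.g., with a warp routine) to pre-align it before the finer deformation stage.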