Virtual try-on network based on pose guidance
Existing pose-guided virtual try-on methods suffer from excessive deformation of clothing texture and occlusion artifacts in the generated images. To address these issues, this paper proposes a pose-guided virtual try-on network (PG-VTON), an improved virtual try-on network based on the Downto (down to the last detail) network. First, a U-Net-based network transforms the pose of a person image and generates the person parsing map under the target pose; an information enhancement module is introduced to improve the accuracy of the parsing map and reduce erroneous occlusion during try-on. Then, a thin plate spline (TPS) transformation warps the target clothing into a shape that fits the person's body, and a grid warping regularization term is introduced to preserve the texture and detail features of the target clothing. Finally, the parsing map and the warped clothing are combined to generate the final virtual try-on image. Experimental results show that, compared with the Downto network, the proposed method improves the average structural similarity (SSIM) of try-on images by 2.83% and the inception score (IS) by 6.74%; compared with other recent virtual try-on methods, it reduces erroneous occlusion during try-on and generates clearer and more realistic results.
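The abstract does not spell out the grid warping regularization term, but a common formulation in TPS-based try-on work penalizes second-order differences of the sampling grid so that neighboring grid intervals stay locally uniform, which suppresses the extreme local distortions that destroy clothing texture. The PyTorch sketch below illustrates that idea under this assumption; the function name and the (B, H, W, 2) grid layout (the format consumed by torch.nn.functional.grid_sample) are illustrative choices, not the paper's actual implementation.

```python
import torch


def grid_warping_regularization(grid: torch.Tensor) -> torch.Tensor:
    """Sketch of a grid deformation constraint for TPS warping (assumed form).

    `grid` is a (B, H, W, 2) sampling grid, as produced by a TPS module for
    torch.nn.functional.grid_sample. The loss penalizes abrupt changes
    between neighboring grid intervals: a locally affine warp makes the
    second-order differences zero, so only uneven, texture-destroying
    distortions are punished.
    """
    # First-order differences: displacement between neighboring grid points.
    dx = grid[:, :, 1:, :] - grid[:, :, :-1, :]   # horizontal intervals
    dy = grid[:, 1:, :, :] - grid[:, :-1, :, :]   # vertical intervals

    # Second-order differences: how much adjacent intervals disagree.
    ddx = dx[:, :, 1:, :] - dx[:, :, :-1, :]
    ddy = dy[:, 1:, :, :] - dy[:, :-1, :, :]

    return ddx.abs().mean() + ddy.abs().mean()
```

In training, such a term would be added to the warping objective with a weight, e.g. `loss = l1(warped_cloth, target_cloth) + lam * grid_warping_regularization(grid)`, trading off fitting accuracy against warp smoothness.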
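The reported SSIM and IS comparisons can be reproduced with standard implementations of both metrics. The sketch below uses torchmetrics as one such implementation (an assumption; the paper does not state which implementation was used, and the tensors here are random placeholders standing in for generated and ground-truth try-on images).

```python
import torch
from torchmetrics.image import StructuralSimilarityIndexMeasure
from torchmetrics.image.inception import InceptionScore

# (N, 3, H, W) float tensors in [0, 1]; placeholders for real data.
try_on = torch.rand(8, 3, 256, 192)      # generated try-on images
reference = torch.rand(8, 3, 256, 192)   # ground-truth person images

# SSIM compares each generated image against its ground-truth counterpart.
ssim = StructuralSimilarityIndexMeasure(data_range=1.0)
print("SSIM:", ssim(try_on, reference).item())

# IS uses only the generated images; normalize=True accepts floats in [0, 1].
# Note: this downloads pretrained Inception weights on first use.
inception = InceptionScore(normalize=True)
inception.update(try_on)
is_mean, is_std = inception.compute()
print("IS: {:.3f} +/- {:.3f}".format(is_mean.item(), is_std.item()))
```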