Art font style transfer is an intriguing yet challenging task that maps the artistic style of a source font onto a target font. This study addresses the limitations of existing methods, in particular their limited robustness in font style transfer and their poor performance when the source and target font styles differ significantly. To tackle these challenges, we propose an end-to-end general network framework that incorporates a self-attention mechanism and adaptive instance normalization to realize artistic style transfer across multiple text-effect domains. The proposed model comprises a generator, two discriminators, and an additional style encoder. To better preserve the font structure and improve network performance, we designed several custom loss functions to optimize the training of the Generative Adversarial Network (GAN). The model was validated on the publicly available art font dataset from the FET-GAN task. In experiments against six state-of-the-art methods, the proposed method demonstrated superior performance both quantitatively and qualitatively. Extensive experimental results show that the model effectively performs font image style transfer while maintaining the glyph structure. The Fréchet Inception Distance (FID) of the proposed method is 72.355, notably lower than the best comparative result of 91.435, highlighting the method's effectiveness.
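The adaptive instance normalization (AdaIN) operation mentioned above transfers style by aligning the per-channel feature statistics of the content (glyph) features with those of the style features. The following is a minimal NumPy sketch of the standard AdaIN formula, AdaIN(x, y) = σ(y) · (x − μ(x)) / σ(x) + μ(y); the function name, array layout, and `eps` value are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization over (channels, height, width) arrays.

    Normalizes each channel of `content` to zero mean and unit variance,
    then rescales it with the per-channel mean and standard deviation of
    `style`, so the output carries the content structure with the style's
    feature statistics.
    """
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    normalized = (content - c_mean) / (c_std + eps)  # whiten content channels
    return normalized * s_std + s_mean               # re-color with style stats
```

In a generator such as the one described here, this operation would typically sit between the content encoder and the decoder, with the style encoder supplying the statistics (or affine parameters derived from them).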
Key words
font style transfer/self-attention/adaptive instance normalization/Generative Adversarial Network (GAN)/glyph constraints