Semi-supervised Text Style Transfer Method Based on Multi-reward Reinforcement Learning
Text style transfer is an important task in natural language processing that aims to change the stylistic attributes of a text while preserving its essential semantic content. However, in many tasks that lack large-scale parallel corpora, existing unsupervised methods suffer from insufficient diversity of the generated text and poor semantic consistency. To address these problems, this paper proposes a semi-supervised multi-stage training framework. It first constructs a pseudo-parallel corpus using a style labeling model and a masked language model, which guides the model to learn diverse transfer styles in a supervised manner. Then, an adversarial similarity reward, a Mis reward, and a style reward are designed to perform reinforcement learning on unlabeled data, improving the model's semantic consistency, logical consistency, and style transfer accuracy. On the sentiment polarity transfer task based on the YELP dataset, the proposed method improves the BLEURT score by 3.1%, the Mis score by 2.5%, and the BLEU score by 9.5%. On the formality transfer experiment based on the GYAFC dataset, it improves the BLEURT score by 6.2% and the BLEU score by 3%.
Keywords: Text generation; Text style transfer; Multi-stage training; Style labeling model; Reinforcement learning
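The abstract only outlines the reward design; as a rough illustration, the sketch below shows how a style reward, an adversarial similarity reward, and a Mis reward could be combined into a single scalar for REINFORCE-style training on unlabeled data. The function names, weights, and mean baseline are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch (not the paper's implementation): combining several reward
# signals into one scalar for REINFORCE-style policy-gradient training.
# Reward functions, weights, and the mean baseline are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class RewardWeights:
    style: float = 1.0           # reward for matching the target style
    adv_similarity: float = 1.0  # adversarial semantic-similarity reward
    mis: float = 1.0             # logical-consistency ("Mis") reward


def combined_reward(source: str,
                    generated: str,
                    style_reward: Callable[[str], float],
                    adv_sim_reward: Callable[[str, str], float],
                    mis_reward: Callable[[str, str], float],
                    w: RewardWeights) -> float:
    """Weighted sum of the three reward signals for one generated sentence."""
    return (w.style * style_reward(generated)
            + w.adv_similarity * adv_sim_reward(source, generated)
            + w.mis * mis_reward(source, generated))


def reinforce_weights(rewards: List[float]) -> List[float]:
    """Baseline-subtracted rewards used to scale the log-probabilities
    of sampled outputs in a REINFORCE-style update."""
    baseline = sum(rewards) / len(rewards)  # simple mean baseline over the batch
    return [r - baseline for r in rewards]
```

In such a setup, each sampled transfer would be scored with `combined_reward`, and the baseline-subtracted values would weight the negative log-likelihood of the sampled tokens during the reinforcement learning stage.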