Recent Progress on Machine Translation Based on Pre-trained Language Models
Natural language processing (NLP) involves many important topics, one of which is machine translation (MT). Pre-trained language models (PLMs), such as BERT and GPT, are state-of-the-art approaches for various NLP tasks, including MT. Therefore, many researchers use PLMs to solve MT problems. To push this research forward, this paper provides an overview of recent advances in the field, including the main research questions and solutions based on various PLMs. We compare the motivations, commonalities, differences and limitations of these solutions, and summarise the datasets commonly used to train such MT models, as well as the metrics used to evaluate them. Finally, further research directions are discussed.
Keywords: Natural language processing · Machine translation · Pre-trained language model · BERT · GPT