Abstract
Data detailed on Machine Translation have been presented. According to news originating from Beijing, People's Republic of China, by NewsRx correspondents, the research stated: “Multilingual Neural Machine Translation (MNMT) has recently made great progress in training models that can translate between multiple languages. However, MNMT faces a significant challenge due to the lack of sufficient parallel corpora for all language pairs.”

Financial supporters of this research include the Chinese National Funding of Social Sciences and the National Language Commission Foundation of China.

Our news journalists obtained a quote from the researchers at the Minzu University of China: “Unsupervised machine translation methods, which utilize monolingual data, have emerged as a solution to this challenge. In this paper, we propose a method that leverages cross-lingual encoders, such as XLM-R, in an unsupervised manner (i.e., using monolingual data and bilingual dictionaries) to train an MNMT model. Our method initializes the MNMT model with a pre-trained cross-lingual encoder and employs two levels of alignment to further align the representation space in the MNMT model.”
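The article does not specify how the two levels of alignment are computed. As a minimal sketch of the general idea behind dictionary-based, word-level representation alignment, the objective below penalizes the cosine distance between embeddings of word pairs taken from a bilingual dictionary. All function names, the toy data, and the cosine-distance objective are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def word_alignment_loss(src_emb, tgt_emb, dictionary):
    """Mean cosine distance between dictionary-paired word embeddings.

    Hypothetical illustration of word-level representation alignment:
    src_emb / tgt_emb map words to vectors, and `dictionary` is a list
    of (source_word, target_word) translation pairs. A loss of 0 means
    every paired vector points in the same direction.
    """
    losses = []
    for s, t in dictionary:
        u, v = src_emb[s], tgt_emb[t]
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        losses.append(1.0 - cos)
    return float(np.mean(losses))

# Toy example: one dictionary pair with identical vectors,
# so the alignment loss should be (numerically) zero.
rng = np.random.default_rng(0)
src_emb = {"hund": rng.normal(size=8)}
tgt_emb = {"dog": src_emb["hund"].copy()}
print(word_alignment_loss(src_emb, tgt_emb, [("hund", "dog")]))
```

In a full training setup, a term like this would be added to the unsupervised translation objective so that gradient updates pull the encoder representations of translation-equivalent words together across languages.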