Document-Level Mongolian-Chinese Neural Machine Translation Based on Parameter Sharing
To address the failure of conventional Mongolian-Chinese neural machine translation to exploit document context effectively, a document-level Mongolian-Chinese neural machine translation model based on the Transformer is constructed. The encoder uses a relative attention mechanism to retrieve global context information across multiple sentences, while the decoder uses a cache-based method to record information about previously translated sentences and treats the cached sentence information as document context when predicting the current sentence. In addition, a grouping strategy is used to share parameters between layers, which reduces the number of model parameters and makes the fullest possible use of the corpus within limited memory. Experimental results show that the proposed model outperforms the sentence-level Transformer by 8.7 BLEU4 points, and outperforms a document-level machine translation model without parameter sharing by 2.49 BLEU4 points.
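The cache-based decoder context can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration in PyTorch, not the paper's implementation: it mean-pools the decoder states of each finished sentence into a single vector, keeps the most recent sentences in a FIFO cache, and attends over the cache with a scaled dot product to produce a context vector for the sentence currently being predicted. The class name `SentenceCache`, the pooling choice, and the cache size are all hypothetical.

```python
import torch
import torch.nn.functional as F

class SentenceCache:
    """FIFO cache of pooled representations of previously translated
    sentences, queried as document context while decoding the
    current sentence (illustrative sketch, not the paper's code)."""

    def __init__(self, max_sentences=3):
        self.max_sentences = max_sentences
        self.entries = []  # list of (d_model,) vectors

    def add(self, sent_hidden):
        # sent_hidden: (seq_len, d_model) decoder states of a finished sentence
        self.entries.append(sent_hidden.mean(dim=0))  # mean-pool to one vector
        if len(self.entries) > self.max_sentences:
            self.entries.pop(0)  # evict the oldest sentence

    def attend(self, query):
        # query: (d_model,) current decoder state; returns a context vector
        if not self.entries:
            return torch.zeros_like(query)
        keys = torch.stack(self.entries)               # (n, d_model)
        scores = keys @ query / query.shape[-1] ** 0.5  # scaled dot product
        weights = F.softmax(scores, dim=0)
        return weights @ keys                           # (d_model,)
```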
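The grouped parameter-sharing strategy can likewise be sketched. Assuming an ALBERT-style grouping in PyTorch, the example below builds one physical encoder layer per group and reuses it for several consecutive forward passes; the class name, layer counts, and dimensions are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class GroupSharedEncoder(nn.Module):
    """num_layers encoder passes backed by only num_groups physical
    layers: layers within a group share one set of parameters."""

    def __init__(self, num_layers=6, num_groups=3, d_model=512, nhead=8):
        super().__init__()
        assert num_layers % num_groups == 0
        self.groups = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            for _ in range(num_groups)
        )
        self.layers_per_group = num_layers // num_groups

    def forward(self, x, mask=None):
        for layer in self.groups:
            # reuse the same layer for several consecutive passes
            for _ in range(self.layers_per_group):
                x = layer(x, src_mask=mask)
        return x

enc = GroupSharedEncoder()
out = enc(torch.randn(2, 10, 512))  # (batch, seq_len, d_model)
```

With `num_layers=6` and `num_groups=3`, the stack stores only three layers' worth of weights while still applying six layers of computation, which matches the abstract's motivation of cutting parameter count to make better use of the corpus in limited memory.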