Cairo University Reports Findings in Machine Translation (Improving neural machine translation for low resource languages through non-parallel corpora: a case study of Egyptian dialect to modern standard Arabic translation)
New research on Machine Translation is the subject of a report. According to news reporting originating from Cairo, Egypt, by NewsRx correspondents, the research stated, “Machine translation for low-resource languages poses significant challenges, primarily due to the limited availability of data. In recent years, unsupervised learning has emerged as a promising approach to overcome this issue by aiming to learn translations between languages without depending on parallel data.” Financial support for this research came from Cairo University.

Our news editors obtained a quote from the research from Cairo University: “A wide range of methods have been proposed in the literature to address this complex problem. This paper presents an in-depth investigation of semi-supervised neural machine translation, specifically focusing on translating Arabic dialects, particularly Egyptian, to Modern Standard Arabic. The study employs two distinct datasets: one parallel dataset containing aligned sentences in both dialects, and a monolingual dataset where the source dialect is not directly connected to the target language in the training data. Three different translation systems are explored in this study. The first is an attention-based sequence-to-sequence model that benefits from the shared vocabulary between the Egyptian dialect and Modern Standard Arabic to learn word embeddings. The second is an unsupervised transformer model that depends solely on monolingual data, without any parallel data.”

According to the news editors, the research concluded: “The third system starts with the parallel dataset for an initial supervised learning phase and then incorporates the monolingual data during the training process.”
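The third system described above follows a two-phase schedule: supervised warm-up on the parallel Egyptian–MSA data, then continued training that folds in the monolingual data. The report does not say how the monolingual data is incorporated; a common choice in the literature is back-translation, where a reverse model generates synthetic source sentences for target-side monolingual text. The sketch below assumes exactly that, and `train_step`, `back_translate`, and the toy corpora are hypothetical stand-ins, not the paper's actual components:

```python
# Minimal sketch of a semi-supervised NMT training schedule:
# Phase 1 trains on parallel (Egyptian, MSA) pairs only; Phase 2 mixes in
# monolingual MSA sentences as synthetic pairs via back-translation.
# All data and model functions here are illustrative stubs.
import random

random.seed(0)

parallel = [("ana gay", "innani qadim"),     # toy (Egyptian, MSA) pairs
            ("ezayak", "kayfa haluka")]
mono_msa = ["al-taqrir jahiz", "al-dars muhim"]  # MSA-only sentences

def back_translate(msa_sentence):
    # Stand-in for a reverse MSA->Egyptian model producing a synthetic source.
    return "<synthetic> " + msa_sentence

def train_step(src, tgt, log):
    # Stand-in for one gradient update; we just record the pair seen.
    log.append((src, tgt))

def semi_supervised_schedule(warmup_epochs=2, joint_epochs=2):
    log = []
    # Phase 1: supervised learning on the parallel dataset only.
    for _ in range(warmup_epochs):
        for src, tgt in parallel:
            train_step(src, tgt, log)
    # Phase 2: incorporate monolingual MSA data as synthetic parallel pairs.
    for _ in range(joint_epochs):
        mixed = parallel + [(back_translate(t), t) for t in mono_msa]
        random.shuffle(mixed)
        for src, tgt in mixed:
            train_step(src, tgt, log)
    return log

log = semi_supervised_schedule()
print(len(log))  # 2 epochs x 2 pairs + 2 epochs x 4 pairs = 12 updates
```

The warm-up phase gives the model a reliable translation signal before noisier synthetic pairs are introduced, which is the usual motivation for ordering the phases this way.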