Chinese Knowledge Graph Question Answering Method Based on ChineseBERT

To address the poor question-answering performance caused by the complexity of Chinese character forms and semantic information, a method called ChineseBERT-KBQA is proposed. It uses the Chinese pre-trained language model ChineseBERT, which fuses glyph and pinyin information, as the semantic embedding layer of the text, improving the performance of traditional semantic-parsing methods on the entity-mention recognition and relation prediction sub-tasks. Specifically, two models are proposed: an entity-mention recognition model based on ChineseBERT-CRF and a relation prediction model based on ChineseBERT-TextCNN-Softmax, which together improve semantic understanding of Chinese text. Finally, information shared between the sub-tasks is combined to predict the final answer. Experiments on the educational question-answering dataset MOOC Q&A and the open-domain question-answering dataset NLPCC2018 demonstrate the effectiveness and accuracy of the proposed method for Chinese knowledge graph question answering.
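As a minimal sketch of the relation-prediction architecture the abstract names (ChineseBERT-TextCNN-Softmax), the toy module below applies a TextCNN with max-over-time pooling and a softmax output over token embeddings. This is not the authors' code: the layer sizes, kernel widths, and relation count are illustrative assumptions, and a random tensor stands in for the ChineseBERT encoder output (which in the paper fuses glyph and pinyin information).

```python
# Hypothetical sketch of a TextCNN + Softmax relation-prediction head,
# operating on contextual token embeddings. A random tensor stands in
# for the ChineseBERT encoder; all sizes here are illustrative.
import torch
import torch.nn as nn


class TextCNNRelationHead(nn.Module):
    """Convolve token embeddings with several kernel widths, max-pool
    over time, and classify the question into one of n_relations."""

    def __init__(self, hidden=768, n_relations=10,
                 kernel_sizes=(2, 3, 4), n_filters=64):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, n_filters, k) for k in kernel_sizes)
        self.fc = nn.Linear(n_filters * len(kernel_sizes), n_relations)

    def forward(self, x):              # x: (batch, seq_len, hidden)
        x = x.transpose(1, 2)          # Conv1d expects (batch, hidden, seq_len)
        # Each conv yields (batch, n_filters, L'); max over time -> (batch, n_filters)
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        logits = self.fc(torch.cat(pooled, dim=1))
        return logits.softmax(dim=-1)  # probability over candidate relations


# Toy usage: stand-in "ChineseBERT" outputs for a batch of 2 questions.
emb = torch.randn(2, 16, 768)
probs = TextCNNRelationHead()(emb)
print(probs.shape)  # torch.Size([2, 10]); each row sums to 1
```

The entity-mention model would analogously place a CRF layer on top of the same encoder for sequence labeling; the design choice in both cases is to keep the pre-trained encoder shared and attach lightweight task-specific heads.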

knowledge graph; BERT; intelligent question answering; deep learning; pre-trained language models

Deng Jianzhi, Fang Yutong, Yang Yan


College of Earth Sciences, Guilin University of Technology, Guilin 541004, China

Guangxi Key Laboratory of Embedded Technology and Intelligent Systems, Guilin University of Technology, Guilin 541004, China

College of Physics and Electronic Information Engineering, Guilin University of Technology, Guilin 541004, China


National Natural Science Foundation of China (81660031)

2024

Science Technology and Engineering
China Society of Technology Economics

CSTPCD; Peking University Core Journal List
Impact factor: 0.338
ISSN: 1671-1815
Year, volume (issue): 2024, 24(23)