Examining Dialogue Consistency Based on Chapter-Level Semantic Graph
[Objective] This paper integrates chapter-level semantic graphs to improve the accuracy of dialogue consistency detection. [Methods] First, we used the pre-trained language model BERT to encode the dialogue context and knowledge base. Second, we constructed a chapter-level semantic graph of the dialogue containing coreference chains and abstract meaning representations. Third, we captured the semantic information of the constructed graph using a multi-relational graph convolutional network. Finally, we built multiple classifiers to predict dialogue inconsistency. [Results] We evaluated the proposed model on the CI-ToD benchmark dataset and compared its performance with existing models. The proposed model's F1 score improved by more than 1% over the best existing models. [Limitations] The proposed model cannot address the omission of co-referential entities in dialogues. [Conclusions] Integrating various types of semantic information, such as coreference chains and abstract meaning representations, can effectively improve the performance of dialogue consistency detection.
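To make the graph-encoding step concrete, the following is a minimal sketch (not the paper's implementation) of one multi-relational graph convolution layer, in the style of R-GCN, applied to a toy chapter-level semantic graph. Node features stand in for BERT encodings, and the two relation types ("coref" and "amr") stand in for coreference-chain and abstract-meaning-representation edges; all sizes and edge sets here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, dim = 4, 8

# Toy node features (in the paper these would come from BERT).
H = rng.standard_normal((num_nodes, dim))

# One adjacency matrix per relation type (hypothetical toy edges).
A_coref = np.array([[0, 1, 0, 0],
                    [1, 0, 0, 0],
                    [0, 0, 0, 0],
                    [0, 0, 0, 0]], dtype=float)
A_amr   = np.array([[0, 0, 1, 0],
                    [0, 0, 0, 1],
                    [1, 0, 0, 0],
                    [0, 1, 0, 0]], dtype=float)
relations = {"coref": A_coref, "amr": A_amr}

# Relation-specific weight matrices plus a self-loop weight.
W = {r: rng.standard_normal((dim, dim)) * 0.1 for r in relations}
W_self = rng.standard_normal((dim, dim)) * 0.1

def rgcn_layer(H, relations, W, W_self):
    """h_i' = ReLU(W_self h_i + sum_r (1/c_{i,r}) sum_{j in N_r(i)} W_r h_j)."""
    out = H @ W_self
    for r, A in relations.items():
        deg = A.sum(axis=1, keepdims=True)  # per-node normalizer c_{i,r}
        norm = np.divide(A, deg, out=np.zeros_like(A), where=deg > 0)
        out += (norm @ H) @ W[r]            # relation-specific message passing
    return np.maximum(out, 0.0)             # ReLU activation

H1 = rgcn_layer(H, relations, W, W_self)
print(H1.shape)  # updated node representations, same shape as input
```

The updated node representations would then be pooled and fed to the per-aspect inconsistency classifiers described above.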
Dialogue System; Consistency Detection; Coreference Chain; Abstract Meaning Representation; Graph Convolutional Network