Task-Oriented Dialogue Understanding with Explicit Knowledge Injection
Dialogue understanding aims to detect user intent given the dialogue history. Due to a lack of domain knowledge, traditional dialogue understanding models fail to understand domain-specific entities. Knowledge-enhanced approaches have been proposed to improve model performance with structured knowledge, where the knowledge is injected implicitly through knowledge embeddings. However, these embeddings must be retrained whenever the knowledge base is updated, which incurs extra cost. Moreover, existing methods suffer from knowledge noise: they incorporate context-irrelevant knowledge that changes the semantics of the utterance. To address these issues, this paper proposes a multi-task learning dialogue understanding model with explicit knowledge injection (K-CAM). K-CAM injects knowledge into the model as natural language text, so the model need not be retrained to refresh knowledge embeddings when the knowledge base changes. A multi-task learning objective that jointly performs intent detection, slot filling, and relevant knowledge recognition is further proposed to resist the knowledge noise problem. Extensive experimental results show that K-CAM achieves significant improvements of 4.87% and 2.09% in macro F1 on the intent detection and slot filling tasks, respectively, compared to other baselines.
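The multi-task objective described above can be sketched as a shared encoder output feeding three classification heads whose losses are combined into a single training signal. The following is a minimal illustrative sketch, not the paper's actual architecture: all module names, dimensions, and loss weights are assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskHeads(nn.Module):
    """Hypothetical joint objective: intent detection (utterance-level),
    slot filling (token-level), and knowledge-relevance recognition."""

    def __init__(self, hidden=256, n_intents=10, n_slots=20):
        super().__init__()
        self.intent_head = nn.Linear(hidden, n_intents)   # one label per utterance
        self.slot_head = nn.Linear(hidden, n_slots)       # one label per token
        self.knowledge_head = nn.Linear(hidden, 2)        # relevant vs. irrelevant
        self.ce = nn.CrossEntropyLoss()

    def forward(self, pooled, tokens, intent_y, slot_y, know_y,
                weights=(1.0, 1.0, 0.5)):
        # pooled: (B, H) utterance vectors; tokens: (B, T, H) token vectors.
        # Loss weights are illustrative, not values from the paper.
        l_intent = self.ce(self.intent_head(pooled), intent_y)
        l_slot = self.ce(self.slot_head(tokens).flatten(0, 1), slot_y.flatten())
        l_know = self.ce(self.knowledge_head(pooled), know_y)
        return weights[0] * l_intent + weights[1] * l_slot + weights[2] * l_know

B, T, H = 4, 8, 256
heads = MultiTaskHeads()
loss = heads(torch.randn(B, H), torch.randn(B, T, H),
             torch.randint(10, (B,)), torch.randint(20, (B, T)),
             torch.randint(2, (B,)))
loss.backward()  # one backward pass updates all three tasks jointly
```

Summing the per-task losses lets the knowledge-relevance head act as a regularizer: gradients from recognizing context-irrelevant knowledge shape the shared representation used by intent detection and slot filling.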