Dual Channel Text Relation Extraction Based on Cross Attention
[Objective] This paper constructs a dual-channel text relation extraction model based on cross-attention to address the partial text-feature coverage of existing models. The model aims to improve the comprehensiveness and accuracy of text relation extraction and to achieve high-performance relation extraction on domain-specific datasets.
[Methods] We proposed a dual-channel cross-attention relation extraction model, DCCAM (Dual Channel Cross Attention Model), designing a dual-channel structure that integrates a sequence channel and a graph channel. We then constructed a cross-attention mechanism combining self-attention and gated attention to promote deep fusion of text features and mine latent associative information. Finally, we conducted experiments on public datasets and on two newly constructed policing datasets.
[Results] On the NYT and WebNLG public datasets, the DCCAM model's F1 scores improved by 3% and 4% over the baseline model. Ablation experiments confirmed that each module contributes to text extraction capability. On the telecom fraud dataset and the aiding-cybercrime dataset in the policing domain, DCCAM improved text relation extraction effectiveness, with F1 scores rising by 8.8% and 11.8% over the baseline model.
[Limitations] We did not explore text relation extraction with large language models.
[Conclusions] The DCCAM model significantly improves text relation extraction, demonstrates the effectiveness and practicality of relation extraction in the policing domain, and can provide text-association analysis and guidance for police work.
Keywords: Text Relation Extraction; Dual Channel Mechanisms; Cross Attention
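The cross-attention fusion described in [Methods] — features from one channel attending over features from the other, with a learned gate controlling how much cross-channel information is admitted — can be sketched roughly as follows. This is a minimal NumPy illustration under our own assumptions; the function names (`cross_attention`, `gated_fusion`), the single-gate formulation, and the weight shapes are hypothetical and not the paper's exact architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(seq_feats, graph_feats):
    # Sequence-channel features act as queries; graph-channel
    # features serve as keys and values (scaled dot-product).
    d = seq_feats.shape[-1]
    scores = seq_feats @ graph_feats.T / np.sqrt(d)   # (n_seq, n_graph)
    return softmax(scores) @ graph_feats              # (n_seq, d)

def gated_fusion(seq_feats, graph_feats, W_g):
    # A sigmoid gate, computed from both views, decides how much
    # cross-channel information to mix into the sequence features.
    cross = cross_attention(seq_feats, graph_feats)
    pre_gate = np.concatenate([seq_feats, cross], axis=-1) @ W_g  # (n_seq, d)
    gate = 1.0 / (1.0 + np.exp(-pre_gate))
    return gate * seq_feats + (1.0 - gate) * cross

# Hypothetical usage with random features from the two channels.
rng = np.random.default_rng(0)
n_seq, n_graph, d = 4, 5, 8
S = rng.standard_normal((n_seq, d))       # sequence-channel features
G = rng.standard_normal((n_graph, d))     # graph-channel features
W_g = rng.standard_normal((2 * d, d))     # gate projection weights
fused = gated_fusion(S, G, W_g)           # (n_seq, d) fused representation
```

In a trained model the gate weights `W_g` would be learned jointly with both channels; here they are random only to show the data flow and shapes.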