Purpose/Significance: The relative positions of causality words are used to help deep learning models improve causality prediction and mine information gain from medical texts.
Method/Process: The relative position information of causality words in medical texts is represented as a relational feature layer embedded in a pre-trained language model, which is then integrated with the baseline models for entity recognition and relation extraction.
Result/Conclusion: Compared with the baseline models BERT-BiLSTM-CRF and CasRel, the F1 score of the model with the embedded relational feature layer improves by 2.92 and 6.41 percentage points respectively, showing better causality prediction capacity.
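The relational feature layer described above can be sketched as follows: each token receives a feature vector indexed by its clipped signed distance to a causality cue word, which is added to the token embeddings before they enter the pre-trained encoder. This is a minimal illustrative sketch in plain Python; the names (`MAX_DIST`, `cue_index`, the toy dimension) are assumptions, not the paper's actual implementation, and a real model would use trainable embeddings of BERT's hidden size.

```python
# Hypothetical sketch of a relative-position "relational feature layer":
# every token gets a feature vector keyed by its clipped signed distance
# to a causality cue word, added to the token embedding.
import random

MAX_DIST = 10   # distances are clipped to [-MAX_DIST, MAX_DIST] (assumed)
EMB_DIM = 4     # toy feature dimension; BERT-base would use 768

def relative_positions(n_tokens, cue_index):
    """Clipped signed distance from each token to the causality cue word."""
    return [max(-MAX_DIST, min(MAX_DIST, i - cue_index))
            for i in range(n_tokens)]

# one feature vector per clipped distance, indexed by distance + MAX_DIST
random.seed(0)
position_table = [[random.uniform(-0.1, 0.1) for _ in range(EMB_DIM)]
                  for _ in range(2 * MAX_DIST + 1)]

def add_position_features(token_embeddings, cue_index):
    """Add the relative-position feature vector to each token embedding."""
    dists = relative_positions(len(token_embeddings), cue_index)
    return [[t + p for t, p in zip(tok, position_table[d + MAX_DIST])]
            for tok, d in zip(token_embeddings, dists)]

tokens = [[0.0] * EMB_DIM for _ in range(6)]  # dummy token embeddings
out = add_position_features(tokens, cue_index=2)
print(len(out), len(out[0]))  # 6 tokens, each with EMB_DIM features
```

In a trained model the `position_table` would be a learnable embedding matrix optimized jointly with the encoder, so that distance-to-cue information is available to the downstream entity recognition and relation extraction heads.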
Key words
natural language processing/causality extraction/pre-training model/BERT/medical text