
Knowledge Prompt Fine-tuning for Event Extraction

Event extraction is an important research focus in information extraction. It aims to extract structured event information from text by identifying and classifying event triggers and arguments. Traditional methods rely on complex downstream networks, require sufficient training data, and perform poorly when data are scarce. Existing work has applied prompt learning to event extraction with some success, but it depends on manually constructed prompts and draws only on the knowledge already present in pre-trained language models, lacking event-specific knowledge. This paper therefore proposes an event extraction method based on knowledge prompt fine-tuning. The method adopts a conditional generation formulation: building on the knowledge of an existing pre-trained language model, it injects event information to impose constraints on argument relations, and optimizes the prompts with a prompt fine-tuning strategy. Extensive experiments show that the method outperforms traditional baselines on trigger extraction and achieves the best results in few-shot settings.
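The page describes the method only at a high level. As a rough illustration of the prompt-based conditional generation idea it summarizes, the sketch below builds a knowledge-aware prompt from an event schema and asks a seq2seq pre-trained model to fill in argument roles. The model name (facebook/bart-base), the prompt template, and the EVENT_SCHEMA are illustrative assumptions, not the paper's actual design, and in the paper this generation step would follow prompt fine-tuning.

```python
# Minimal sketch (not the paper's implementation) of prompt-based
# conditional generation for event argument extraction with a seq2seq PLM.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "facebook/bart-base"  # assumed backbone, for illustration only

# Hypothetical event schema used to inject event knowledge into the prompt:
# the argument roles act as constraints on what the decoder is asked to fill.
EVENT_SCHEMA = {
    "Conflict.Attack": ["Attacker", "Target", "Instrument", "Place"],
}

def build_prompt(sentence: str, event_type: str) -> str:
    """Compose a template that injects event-type knowledge
    (argument roles) as constraints for conditional generation."""
    roles = ", ".join(EVENT_SCHEMA[event_type])
    return (
        f"Sentence: {sentence} "
        f"Event type: {event_type}. "
        f"Fill the roles ({roles}) with spans from the sentence:"
    )

def extract(sentence: str, event_type: str) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
    inputs = tokenizer(build_prompt(sentence, event_type),
                       return_tensors="pt", truncation=True)
    # Conditional generation: the model generates the filled template.
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(extract(
        "The rebels attacked the convoy with rockets near the border.",
        "Conflict.Attack",
    ))
```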

event extraction; prompt learning; information extraction; natural language processing; pre-trained language model

李璐、朱焱


School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu 610031, Sichuan, China


Sichuan Science and Technology Program

2019YFSY0032

2024

计算机与现代化 (Computer and Modernization)
Jiangxi Computer Society; Jiangxi Institute of Computing Technology

CSTPCD
Impact factor: 0.472
ISSN: 1006-2475
Year, volume (issue): 2024, (7)