计算机工程与设计 (Computer Engineering and Design), 2024, Vol. 45, Issue 12: 3772-3778. DOI: 10.16208/j.issn1000-7024.2024.12.033

Large language models for open-source intelligence information extraction

赵勤博¹, 王又辰¹, 陈荣², 宋颖毅¹, 栾真¹, 田夫兰¹

Author Information

  • 1. The 706th Institute, Second Academy of China Aerospace Science and Industry Corporation, Beijing 100854, China
  • 2. Information Technology Center, General Office of the CPC Yunnan Provincial Committee, Kunming 650228, Yunnan, China


Abstract

To address the reliance on multiple specialized models and the strong restrictions on extractable attributes in open-source intelligence information extraction, a GLM-based large language model is adopted as the extraction tool, and extraction accuracy is improved through instruction fine-tuning and in-context learning. An SFT dataset is constructed by generalizing the original questions with an automated instruction-generation method. Unified multi-task fine-tuning is conducted to learn common extraction patterns, and automatic chain-of-thought prompt expansion is employed to enhance the model's reasoning ability. Experimental results demonstrate that, on open-source intelligence named entity recognition, relation extraction, and event extraction tasks, the fine-tuned model meets the extraction requirements of different scenarios and achieves good extraction performance.
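The multi-task SFT dataset the abstract describes could, as a minimal sketch, pair a task-specific instruction with the source text and the expected structured extraction. The field names, task labels, and example records below are assumptions for illustration only, not the paper's actual schema:

```python
import json

def make_sft_sample(task: str, instruction: str, text: str, output: dict) -> dict:
    """Assemble one supervised fine-tuning record for an extraction task.

    Each record pairs a natural-language instruction with the input text
    and a gold structured extraction serialized as JSON.
    """
    return {
        "task": task,                # e.g. "NER", "RE", "EE" (assumed labels)
        "instruction": instruction,  # natural-language task description
        "input": text,               # raw open-source text to extract from
        "output": json.dumps(output, ensure_ascii=False),  # gold extraction
    }

samples = [
    make_sft_sample(
        "NER",
        "Extract all named entities (person, organization, location) from the text.",
        "Acme Corp opened a new office in Berlin.",
        {"organization": ["Acme Corp"], "location": ["Berlin"]},
    ),
    make_sft_sample(
        "RE",
        "Extract (head, relation, tail) triples from the text.",
        "Acme Corp opened a new office in Berlin.",
        {"triples": [["Acme Corp", "located_in", "Berlin"]]},
    ),
]

for s in samples:
    print(s["task"], "->", s["output"])
```

Training on several such task types in one dataset is what lets a single fine-tuned model replace the multiple specialized extractors the abstract mentions.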


Key words

open source intelligence; large language model; information extraction; automatic instruction generation; instruction tuning; in-context learning; automatic chain-of-thought


Publication year: 2024
Journal: 计算机工程与设计 (Computer Engineering and Design)
Publisher: The 706th Institute, Second Academy of China Aerospace Science and Industry Corporation
Indexing: CSTPCD; Peking University Core Journals
Impact factor: 0.617
ISSN: 1000-7024