Relation extraction is a fundamental task that aims to extract the relations between entities from unstructured text. Recent developments show that Large Language Models (LLMs) and foundation models can improve the performance of several Natural Language Processing (NLP) tasks. These models exploit the language-representation ability of deep learning and pre-trained models and can automatically learn the semantic features of relations. However, a method that effectively uses a large model to address entity overlap and insufficient information exchange has yet to be presented. Hence, a relation-extraction model based on a large language model is proposed. First, the Large Language Model Meta AI (LLaMA) is adapted to the task via fine-tuning. To extract relations, a self-attention mechanism is used to enhance the correlation between entity pairs and the information sharing between entities. Average pooling is then applied to generalize over the entire sentence. A filtering matrix is designed for entity pairs, part-of-speech information is introduced to enhance the semantics, and invalid triples are filtered out according to the relevance of the entity pairs in the filtering matrix. Experimental results show that the proposed model achieves F1-scores of 93.1% and 90.4% on the public New York Times (NYT) and WebNLG datasets, respectively. With the fine-tuned LLaMA model serving as the encoder, the proposed algorithm outperforms the baseline models in terms of accuracy and F1-score, verifying its effectiveness.
Keywords: relation extraction; artificial intelligence; attention mechanism; Large Language Model (LLM); part of speech
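To make the described pipeline concrete, the following is a minimal PyTorch sketch of its three stages: self-attention over encoder states, average pooling to obtain a sentence representation, and a filtering score over entity pairs. This is not the authors' implementation; the module name EntityPairFilter, the bilinear pair scorer, the default hidden size of 4096 (the LLaMA-7B hidden width), and the random tensor standing in for fine-tuned LLaMA hidden states are all illustrative assumptions.

import torch
import torch.nn as nn

class EntityPairFilter(nn.Module):
    """Hypothetical sketch: encoder states -> self-attention ->
    average pooling -> entity-pair filtering and relation scoring."""

    def __init__(self, hidden=4096, num_relations=24):
        super().__init__()
        # self-attention to strengthen information sharing between tokens
        self.self_attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        # bilinear score acting as one entry of the filtering matrix
        self.pair_scorer = nn.Bilinear(hidden, hidden, 1)
        self.rel_classifier = nn.Linear(2 * hidden, num_relations)

    def forward(self, enc, head_idx, tail_idx):
        # enc: (batch, seq_len, hidden) hidden states from the fine-tuned
        # encoder; POS embeddings could be added to enc (omitted here)
        attn_out, _ = self.self_attn(enc, enc, enc)
        # average pooling generalizes over the entire sentence
        sent = attn_out.mean(dim=1)
        batch = torch.arange(enc.size(0))
        head, tail = attn_out[batch, head_idx], attn_out[batch, tail_idx]
        # relevance of the entity pair; low scores mark invalid triples
        keep_score = torch.sigmoid(self.pair_scorer(head, tail)).squeeze(-1)
        logits = self.rel_classifier(torch.cat([head + sent, tail + sent], dim=-1))
        return keep_score, logits

A toy invocation with a small hidden size, using random tensors in place of real encoder output:

enc = torch.randn(2, 16, 256)                       # stand-in for LLaMA states
model = EntityPairFilter(hidden=256)
keep, logits = model(enc, torch.tensor([3, 5]), torch.tensor([7, 9]))

In a real system the keep_score would be thresholded to discard invalid entity pairs before the relation logits are used, matching the filtering step the abstract describes.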