
Mutual Learning Prototype Network for Few-Shot Text Classification

Existing few-shot text classification methods mostly rely on a single prototype for training and inference, which makes them susceptible to noise and other factors and leads to insufficient generalization ability. To address this, a mutual-learning prototype network for few-shot text classification is proposed. While retaining the existing approach of computing a prototype directly from text embedding features, the method introduces the Bidirectional Encoder Representations from Transformers (BERT) model and feeds the text embedding features into it to generate a second prototype. A mutual learning algorithm then lets the two prototypes constrain each other and exchange knowledge, filtering out inaccurate semantic information. This process is designed to enhance the feature extraction ability of the model and to improve classification accuracy through the joint decision of the two prototypes. Experimental results on few-shot text classification datasets confirm the effectiveness of the proposed method. On the FewRel few-shot relation classification dataset, the method improves accuracy over the current best method by 2.97% in the 5-way 1-shot experiment and by 1.99% in the 5-way 5-shot experiment.
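The pipeline described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: a random projection stands in for the BERT branch, the embeddings are synthetic, and all variable names (`proto_direct`, `proto_bert`, `mutual_loss`, etc.) are illustrative.

```python
# Toy sketch of a mutual-learning prototype network:
# two prototype branches, a mutual-KL constraint, and a joint decision.
import numpy as np

rng = np.random.default_rng(0)
N_WAY, K_SHOT, DIM = 5, 5, 16          # 5-way 5-shot, toy embedding size

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Branch 1: classic prototypes = per-class mean of support embeddings.
support = rng.normal(size=(N_WAY, K_SHOT, DIM))    # [way, shot, dim]
proto_direct = support.mean(axis=1)                # [way, dim]

# Branch 2: a second prototype from a transformed view of the same
# embeddings (a random projection stands in for the BERT encoder).
W = rng.normal(size=(DIM, DIM)) / np.sqrt(DIM)
proto_bert = np.tanh(support @ W).mean(axis=1)     # [way, dim]

def class_probs(queries, protos):
    # Negative squared Euclidean distance as logits, as in prototype nets.
    d2 = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return softmax(-d2, axis=1)                    # [query, way]

queries = support[:, 0, :] + 0.1 * rng.normal(size=(N_WAY, DIM))
p1 = class_probs(queries, proto_direct)            # branch-1 distribution
p2 = class_probs(np.tanh(queries @ W), proto_bert) # branch-2 distribution

# Mutual learning: a symmetric KL term makes each branch mimic the other;
# during training it would be added to each branch's classification loss.
kl = lambda p, q: (p * np.log(p / q)).sum(axis=1).mean()
mutual_loss = kl(p1, p2) + kl(p2, p1)

# Joint decision: average the two branches' class distributions.
pred = (0.5 * (p1 + p2)).argmax(axis=1)
print(pred.shape, mutual_loss >= 0.0)
```

In a real system the two branches would be trained jointly, so the mutual-KL term lets the directly computed prototypes and the BERT-derived prototypes filter each other's noisy semantics before the averaged distribution makes the final prediction.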

artificial intelligence; text classification; few-shot learning; mutual learning; prototype network

刘俊、秦晓瑞、陶剑、董洪飞、李晓旭


China Aero-Polytechnology Establishment, Beijing 100028, China

School of Computer and Communication, Lanzhou University of Technology, Lanzhou 730050, China


National Natural Science Foundation of China; Technical Foundation Project of the Equipment Development Department of the Central Military Commission

62176110; 221ZHK11015

2024

Journal of Beijing University of Posts and Telecommunications
Beijing University of Posts and Telecommunications

Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 0.592
ISSN:1007-5321
Year, volume (issue): 2024, 47(3)