Mutual Learning Prototype Network for Few-Shot Text Classification
Existing methods for few-shot text classification usually rely on a single prototype for training and inference, which makes them susceptible to noise and other disturbances and limits their generalization ability. To address this, a Mutual Learning Prototype Network for few-shot text classification is proposed. While retaining the conventional approach of computing a prototype directly from the text embedding features, the method introduces a BERT network that takes the text embedding features as input and generates a second prototype. A mutual learning algorithm then lets the two prototypes constrain each other and exchange knowledge, filtering out inaccurate semantic information. This enhances the feature extraction capability of the model, and classification accuracy is further improved through joint decision making over the two prototypes. Experimental results on few-shot text classification datasets confirm the effectiveness of the proposed approach: on the FewRel dataset, the method improves accuracy over the current best method by 2.97% in the 5-way 1-shot setting and by 1.99% in the 5-way 5-shot setting.
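The pipeline described above can be sketched in a minimal, self-contained way. The snippet below is an illustrative NumPy toy, not the paper's implementation: the class prototypes of the first branch are mean-pooled support embeddings as in a standard prototypical network, a random linear map `W` stands in for the BERT branch, the mutual-learning constraint is modeled as a symmetric KL divergence between the two branches' class distributions, and the joint decision averages the two distributions. All names and shapes here are assumptions for illustration.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over class scores
    e = np.exp(x - x.max())
    return e / e.sum()

def logits(query, protos):
    # negative squared Euclidean distance to each class prototype
    return np.array([-np.sum((query - p) ** 2) for p in protos])

def symmetric_kl(p, q, eps=1e-12):
    # mutual-learning consistency term: symmetric KL between
    # the two branches' predicted class distributions
    p, q = p + eps, q + eps
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# toy episode: 3-way 5-shot, 8-dim embeddings (hypothetical sizes)
rng = np.random.default_rng(0)
n_way, n_shot, dim = 3, 5, 8
support = rng.normal(size=(n_way, n_shot, dim))

# branch 1: prototypes computed directly from embedding features
protos_a = support.mean(axis=1)

# branch 2: a random linear map stands in for the BERT-refined prototypes
W = np.eye(dim) + 0.1 * rng.normal(size=(dim, dim))
protos_b = protos_a @ W

# a query close to class 0
query = support[0, 0] + rng.normal(scale=0.1, size=dim)

pa = softmax(logits(query, protos_a))
pb = softmax(logits(query, protos_b))

ml_loss = symmetric_kl(pa, pb)   # mutual-learning loss between branches
joint = (pa + pb) / 2            # joint decision over the two prototypes
pred = int(np.argmax(joint))
```

During training, `ml_loss` would be added to the classification losses of both branches so that each prototype constrains the other; at inference, only the averaged distribution `joint` is used for the final prediction.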