
Text Classification Method Combining Attention Gated Neural Network and Stacking Algorithm

To better address the low generalization ability and limited classification accuracy of traditional single classifiers, as well as their lack of focus on key information in the text, a text classification model named AGNN-Stacking is proposed, combining an Attention Gated Neural Network (AGNN) with the Stacking algorithm. For feature extraction, the AGNN model adopts a parallel structure of BiLSTM and BiGRU models, each augmented with an attention mechanism, which extracts richer key features from the text. For classification, the Stacking algorithm combines the strengths of five differentiated base classifiers, improving classification accuracy while offering good generalization ability and stability. Experimental results on the ChineseNLPCorpus dataset show that, compared with machine learning algorithms and single neural network models, the proposed method effectively improves the accuracy of text classification.
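The abstract describes pooling the hidden states of the parallel BiLSTM and BiGRU branches with an attention mechanism and concatenating the results into one feature vector. The exact score function is not given in the abstract; the following is a minimal NumPy sketch assuming a common dot-product attention with a learned query vector `w` (a hypothetical name), with random arrays standing in for the recurrent outputs.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(H, w):
    """Attention pooling over a sequence of hidden states.
    H: (T, d) hidden states from one recurrent branch; w: (d,) learned query.
    Returns a (d,) sentence vector, a weighted sum of the hidden states."""
    scores = H @ w           # (T,) unnormalized relevance score per time step
    alpha = softmax(scores)  # attention weights summing to 1
    return alpha @ H         # weighted sum over time steps

rng = np.random.default_rng(0)
H_lstm = rng.normal(size=(10, 8))  # stand-in for BiLSTM outputs (T=10, d=8)
H_gru = rng.normal(size=(10, 8))   # stand-in for BiGRU outputs
w = rng.normal(size=8)
# Parallel branches: pool each with attention, then concatenate the results
feat = np.concatenate([attention_pool(H_lstm, w), attention_pool(H_gru, w)])
print(feat.shape)  # (16,)
```

Because each branch is pooled independently before concatenation, the downstream classifier sees features from both the LSTM-style and GRU-style gating in a single vector.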

Keywords: single classifier; accuracy; gated neural network; feature extraction; generalization ability
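The Stacking step in the abstract fuses five differentiated base classifiers under a meta-learner. The abstract does not name the five classifiers, so the choice below (linear SVM, naive Bayes, k-NN, random forest, logistic regression) is an illustrative assumption, sketched with scikit-learn's `StackingClassifier` on synthetic data rather than the paper's text features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

# Synthetic stand-in for extracted text feature vectors
X, y = make_classification(n_samples=300, n_features=16, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Five differentiated base classifiers (assumed choices, not from the paper)
base = [
    ("svm", LinearSVC()),
    ("nb", GaussianNB()),
    ("knn", KNeighborsClassifier()),
    ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ("lr", LogisticRegression(max_iter=1000)),
]

# cv=5: out-of-fold base predictions become the meta-learner's inputs,
# which is what gives Stacking its generalization benefit
clf = StackingClassifier(
    estimators=base,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
clf.fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 3))
```

Training the meta-learner on cross-validated predictions, rather than on predictions over the same data the base models were fit on, is the standard guard against the meta-learner overfitting to base-model training error.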

Zou Wang, Zhang Wubo


School of Electrical and Information Engineering, Hubei University of Automotive Technology, Shiyan 442000, China


2024

Computer and Digital Engineering
The 709 Research Institute of China Shipbuilding Industry Corporation


CSTPCD
Impact factor: 0.355
ISSN:1672-9722
Year, volume (issue): 2024, 52(12)