Dual channel text classification model based on feature reuse
Although structures combining convolutional neural networks (CNN) and recurrent neural networks (RNN) have been widely used in text classification, most CNN-RNN-based text classification models adopt a single-channel mode, which greatly limits their ability to extract text features. Therefore, a dual channel text classification model based on feature reuse is proposed. First, the model uses long short-term memory (LSTM) networks and gated recurrent units (GRU) to extract the contextual semantic information of the text in the RNN channel, and uses the CNN channel to extract local features of the text. Second, an attention mechanism is introduced into each of the two channels so that the model focuses accurately on the keywords in the text. In addition, the RNN channel is improved to reuse the original features, further strengthening the model's ability to extract global features. Evaluation on the THUCNews dataset shows that the proposed model reaches a classification accuracy of 96.61% and achieves better classification results.
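The architecture described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact implementation: the layer sizes, the additive attention form, the concatenation-based channel fusion, and the feature-reuse step (re-injecting the original embeddings into the GRU input) are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class DualChannelClassifier(nn.Module):
    """Sketch of a dual channel CNN-RNN classifier with feature reuse.

    All hyperparameters below are illustrative assumptions.
    """

    def __init__(self, vocab_size=5000, embed_dim=128, hidden=64, num_classes=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # RNN channel: LSTM followed by GRU, as the abstract describes.
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        # Feature reuse (assumed form): the GRU also receives the original
        # embeddings concatenated with the LSTM output.
        self.gru = nn.GRU(2 * hidden + embed_dim, hidden,
                          batch_first=True, bidirectional=True)
        # CNN channel: 1-D convolution captures local n-gram features.
        self.conv = nn.Conv1d(embed_dim, 2 * hidden, kernel_size=3, padding=1)
        # One simple additive attention scorer per channel (assumed form).
        self.attn_rnn = nn.Linear(2 * hidden, 1)
        self.attn_cnn = nn.Linear(2 * hidden, 1)
        self.fc = nn.Linear(4 * hidden, num_classes)

    def _attend(self, feats, scorer):
        # feats: (B, T, D) -> weighted sum over time: (B, D)
        weights = torch.softmax(scorer(feats), dim=1)
        return (weights * feats).sum(dim=1)

    def forward(self, tokens):
        x = self.embed(tokens)                      # (B, T, E)
        # RNN channel with feature reuse.
        h, _ = self.lstm(x)                         # (B, T, 2*hidden)
        h = torch.cat([h, x], dim=-1)               # reuse original embeddings
        h, _ = self.gru(h)                          # (B, T, 2*hidden)
        rnn_vec = self._attend(h, self.attn_rnn)
        # CNN channel: Conv1d expects (B, E, T).
        c = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        cnn_vec = self._attend(c, self.attn_cnn)
        # Fuse the two channels by concatenation and classify.
        return self.fc(torch.cat([rnn_vec, cnn_vec], dim=-1))
```

A forward pass on a batch of token-id sequences of shape `(batch, seq_len)` yields class logits of shape `(batch, num_classes)`.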
text classification; attention mechanism; dual channel; feature extraction; feature reuse