Multi-channel Enhanced Graph Convolutional Network for Aspect-based Sentiment Analysis
Traditional single-channel feature extraction approaches capture features solely from dependency relations, ignoring the semantic similarity and dependency types between words. Although graph convolutional network-based approaches have achieved some success in sentiment analysis, aggregating semantic information and syntactic structure features remains challenging, and the gradual loss of semantic features during training degrades the final sentiment classification performance. To prevent the model from misinterpreting relevant sentiment words due to the absence of prior knowledge, external knowledge should be incorporated to enrich the text. At present, how to use Graph Neural Networks (GNNs) to fuse syntactic and semantic features still deserves further research. To address these issues, this paper proposes a multi-channel enhanced graph convolutional network model. First, graph convolution operations are performed on syntactic graphs enhanced with affective knowledge and dependency types to obtain two syntax-based representations, which are then fused with semantic representations learned through multi-head attention and graph convolution, so that the multi-channel features can be learned complementarily. Experimental results on five publicly available datasets show that our model surpasses the benchmark models in both accuracy and macro-F1. These findings demonstrate the importance of enhancing syntactic graphs with dependency types and affective knowledge, and highlight the effectiveness of combining semantic information with syntactic structure.
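The following is a minimal sketch, not the paper's implementation, of the multi-channel idea described above: two syntax-based channels apply graph convolution over dependency graphs re-weighted by affective knowledge and by dependency types, a semantic channel applies graph convolution over an attention-score graph produced by multi-head attention, and the three representations are fused. All class names, tensor shapes, and the concatenation-based fusion are illustrative assumptions.

```python
# Illustrative sketch of a multi-channel enhanced GCN layer (PyTorch).
# Names and shapes are assumptions, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphConv(nn.Module):
    """Single graph convolution: H' = ReLU(normalize(A) @ H @ W)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        # Row-normalize the adjacency so each node averages its neighbors.
        denom = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        return F.relu(self.linear(torch.bmm(adj / denom, h)))


class MultiChannelGCN(nn.Module):
    def __init__(self, hidden_dim, num_heads=4):
        super().__init__()
        self.syn_affective_gcn = GraphConv(hidden_dim, hidden_dim)  # channel 1
        self.syn_deptype_gcn = GraphConv(hidden_dim, hidden_dim)    # channel 2
        self.sem_gcn = GraphConv(hidden_dim, hidden_dim)            # channel 3
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.fuse = nn.Linear(3 * hidden_dim, hidden_dim)

    def forward(self, h, adj_affective, adj_deptype):
        # h:             [batch, seq_len, hidden_dim] contextual word vectors
        # adj_affective: dependency graph re-weighted by affective scores
        # adj_deptype:   dependency graph weighted by dependency types
        syn1 = self.syn_affective_gcn(h, adj_affective)
        syn2 = self.syn_deptype_gcn(h, adj_deptype)

        # Semantic channel: averaged multi-head attention scores serve as a
        # dense semantic graph, over which graph convolution aggregates.
        _, attn_weights = self.attn(h, h, h, need_weights=True,
                                    average_attn_weights=True)
        sem = self.sem_gcn(h, attn_weights)

        # Complementary fusion of the three channel representations.
        return self.fuse(torch.cat([syn1, syn2, sem], dim=-1))
```

In this sketch the fusion is a simple concatenation followed by a linear projection; a gated or attention-based fusion would be an equally plausible way to let the syntactic and semantic channels complement each other.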