Transformer achieves excellent results on large-scale datasets, but its use of multi-head attention makes the model overly complex, and its performance on small-scale datasets is unsatisfactory. Research on replacing multi-head attention has produced some results in image processing, yet such work remains scarce in natural language processing. To address this, an attention-free method called the multi-layer semantics perceptron (MSP) is first proposed; its core innovation is replacing the multi-head attention in the encoder with a token sequence transformation function, which lowers model complexity and yields better semantic representations. Next, a dynamic depth control framework (DDCF) is proposed to optimize model depth and further reduce complexity. Finally, building on MSP and DDCF, the dynamic multi-layer semantics perceptron (DMSP) model is proposed. Comparative experiments on several text datasets show that DMSP improves classification accuracy while effectively reducing model complexity; at the same depth, DMSP achieves markedly higher classification accuracy than Transformer with far fewer parameters.
A dynamic multi-layer semantics perceptron without an attention mechanism
Transformer has achieved excellent results on large-scale datasets, but its use of multi-head attention (MHA) makes it overly complex, and its performance on small-scale datasets is poor. Although the replacement of MHA has been studied with considerable success in image processing, it has received little attention in natural language processing. Therefore, a method called the multi-layer semantics perceptron (MSP) is proposed. Its major innovation is replacing MHA with a simple token sequence transformation function, achieving better semantic feature representations at lower complexity. In addition, a dynamic depth control framework (DDCF) is proposed, which automatically optimizes network depth and thereby reduces model complexity markedly. Finally, based on MSP and DDCF, the dynamic multi-layer semantics perceptron (DMSP) model is proposed. Experimental results on multiple datasets show that, compared with a Transformer of the same depth, DMSP achieves significantly better performance while its parameter count drops sharply.
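As a rough illustration of the idea behind MSP, replacing the attention sublayer with a learned transformation applied along the token (sequence) axis, here is a minimal PyTorch sketch. The class name `TokenMixingBlock`, the layer sizes, and the choice of a linear map over the sequence dimension are all illustrative assumptions; the paper's actual token sequence transformation function may differ.

```python
# Hypothetical sketch: an attention-free encoder block in the spirit of MSP.
# The MHA sublayer is replaced by a transformation over the token axis, so
# every output token is a learned combination of all input tokens.
import torch
import torch.nn as nn

class TokenMixingBlock(nn.Module):
    """Encoder block that mixes information across tokens with a linear map
    over the sequence dimension instead of multi-head attention."""
    def __init__(self, seq_len: int, d_model: int, d_ff: int = 256):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        # Token sequence transformation: acts on the sequence axis.
        self.token_mix = nn.Sequential(nn.Linear(seq_len, seq_len), nn.GELU())
        self.norm2 = nn.LayerNorm(d_model)
        # Standard position-wise feed-forward sublayer, as in Transformer.
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        y = self.norm1(x).transpose(1, 2)    # (batch, d_model, seq_len)
        y = self.token_mix(y).transpose(1, 2)
        x = x + y                            # first residual connection
        return x + self.ffn(self.norm2(x))   # second residual connection

x = torch.randn(8, 128, 64)                 # toy batch: 8 texts, 128 tokens
block = TokenMixingBlock(seq_len=128, d_model=64)
print(block(x).shape)                        # torch.Size([8, 128, 64])
```

Note that such a block has no query-key-value projections, which is where the parameter savings relative to MHA would come from under this reading.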
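The abstract does not spell out how DDCF chooses a depth; one simple reading is a grow-and-stop search over the number of stacked blocks, halting once a held-out metric plateaus. The function below, with hypothetical `build_model` and `evaluate` callables, sketches that reading only and is not the authors' algorithm.

```python
# Hypothetical sketch of dynamic depth control as greedy depth growth.
# `build_model(depth)` and `evaluate(model)` are assumed callables
# supplied by the surrounding training pipeline.
def choose_depth(build_model, evaluate, max_depth=12, patience=2):
    best_score, best_depth, stall = float("-inf"), 1, 0
    for depth in range(1, max_depth + 1):
        score = evaluate(build_model(depth))   # e.g. validation accuracy
        if score > best_score:
            best_score, best_depth, stall = score, depth, 0
        else:
            stall += 1
            if stall >= patience:              # stop once gains plateau
                break
    return best_depth

# Toy usage: pretend the metric saturates at depth 4.
print(choose_depth(lambda d: d, lambda m: min(m, 4)))  # -> 4
```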