Transformer Fault Diagnosis Method Based on Self-attention Mechanism and 1D-CNN
Objective  Transformers are critical equipment in the power system. Effectively identifying the fault category when a transformer fails can improve the efficiency of power maintenance and is of great significance for the safe operation of the power grid. To address the low accuracy of transformer fault identification in power grid maintenance, this paper proposed a transformer fault diagnosis method based on a self-attention mechanism and a one-dimensional convolutional neural network (1D-CNN). Conventional convolution often loses feature information when processing dissolved gas analysis (DGA) samples, resulting in low diagnostic accuracy. By combining the self-attention mechanism with the 1D-CNN, the proposed method addresses this issue and improves the accuracy and reliability of transformer fault diagnosis.

Methods  To reduce the loss of feature information during inter-layer propagation in the convolutional network, the ReLU activation function in the original model was replaced with the LeakyReLU function. Unlike ReLU, under which many neurons are never activated, LeakyReLU reduces the sparsity of the model and increases the diversity of the network's feature information. The self-attention mechanism weights the features of the dissolved-gas data in transformer oil, effectively enhancing the salient feature information. A dynamically decaying learning-rate schedule was applied to the optimizer. (Illustrative sketches of these three components follow the abstract.)

Results  The proposed method reduced the loss to 0.078, a decrease of 44.7% and 38.6% compared with the variants without the dynamically decaying learning rate and with ReLU activation, respectively. The diagnostic accuracy reached 93.79%, an improvement of 0.36% and 2.12% over the 1D-CNN and GOA-BP methods, respectively.

Conclusion  Case-study simulations validated the effectiveness and superiority of the proposed method, demonstrating that the transformer fault diagnosis method based on the self-attention mechanism and 1D-CNN can effectively improve diagnostic accuracy and reduce model loss.
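As a minimal illustration of the activation swap described in Methods, the sketch below shows a 1D convolutional block using LeakyReLU in place of ReLU. The framework (PyTorch), layer sizes, negative slope, and the nine-gas input dimension are all assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

# Illustrative only: the paper does not specify the framework, channel
# counts, or the LeakyReLU negative slope; the values below are assumed.
conv_block = nn.Sequential(
    nn.Conv1d(in_channels=1, out_channels=16, kernel_size=3, padding=1),
    # LeakyReLU keeps a small gradient for negative inputs, so fewer
    # neurons go "dead" than with ReLU, reducing model sparsity.
    nn.LeakyReLU(negative_slope=0.01),
)

x = torch.randn(8, 1, 9)    # batch of 8 samples, 9 DGA gas features (assumed)
print(conv_block(x).shape)  # torch.Size([8, 16, 9])
```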
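The self-attention weighting of the DGA features could take the following generic form: scaled dot-product attention applied to 1D convolutional feature maps, with a residual connection so the re-weighted features augment rather than replace the originals. The paper does not give its exact attention formulation, head count, or placement in the network, so this is a sketch under those assumptions.

```python
import torch
import torch.nn as nn

class SelfAttention1D(nn.Module):
    """Scaled dot-product self-attention over 1D feature maps (a generic
    sketch; not the paper's exact formulation)."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv1d(channels, channels, kernel_size=1)
        self.key = nn.Conv1d(channels, channels, kernel_size=1)
        self.value = nn.Conv1d(channels, channels, kernel_size=1)
        self.scale = channels ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, length)
        q = self.query(x).transpose(1, 2)                   # (B, L, C)
        k = self.key(x)                                     # (B, C, L)
        v = self.value(x).transpose(1, 2)                   # (B, L, C)
        attn = torch.softmax((q @ k) * self.scale, dim=-1)  # (B, L, L)
        out = (attn @ v).transpose(1, 2)                    # (B, C, L)
        return x + out  # residual: re-weighted features added back

feats = torch.randn(8, 16, 9)  # output of a 1D conv layer (assumed shape)
print(SelfAttention1D(16)(feats).shape)  # torch.Size([8, 16, 9])
```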
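The dynamically decaying learning rate can be sketched with a standard exponential scheduler; the optimizer choice (Adam), initial rate, and decay factor below are assumptions, as the abstract does not specify them.

```python
import torch

# Placeholder model: 9 gas features -> 6 fault classes (assumed dimensions).
model = torch.nn.Linear(9, 6)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Multiply the learning rate by 0.95 after every epoch (assumed factor).
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)

for epoch in range(100):
    # ... forward pass, loss computation, backward pass, optimizer.step() ...
    scheduler.step()  # shrink the learning rate for the next epoch
```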