Solar flare forecasting model based on multi-modal feature fusion
This paper presents STCNet, a solar flare forecasting model based on multi-modal feature fusion. The model's inputs are magnetograms and the corresponding magnetic field characteristic parameters. By leveraging the correlation between these inputs and the complementary information carried by the two modalities, STCNet aims to enhance the diversity and richness of the training data. The Swin Transformer, with its hierarchical structure and efficient shifted-window strategy, is employed to process the magnetograms, allowing image features to be extracted efficiently while reducing computational complexity. In addition, a one-dimensional convolutional neural network extracts features from the magnetic field characteristic parameters. The feature vectors from the two modalities are then fused and passed to a fully connected layer for classification. On the key evaluation metrics, STCNet achieves an F1 score of 0.2342, a true skill statistic (TSS) of 0.8251, a true positive rate of 0.9227, a false positive rate of 0.0976, an AUC of 0.96, and an overall accuracy of 90.27%. These results indicate a high true positive rate and a low false positive rate for flare occurrence, demonstrating superior predictive performance compared with the single-modal Swin Transformer and Deep Residual Network (ResNet) models that use magnetograms alone as input. STCNet also outperforms a multi-modal ResNet model given the same inputs. Compared with existing studies, STCNet likewise shows remarkable performance in terms of TSS.
Keywords: solar flare prediction; deep learning; multi-modal; Swin Transformer model
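The fusion scheme described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the image branch is replaced by a placeholder feature vector standing in for the Swin Transformer output, and all dimensions, weights, and the `conv1d` helper are hypothetical. It only shows the shape of the pipeline: a 1D convolution over the magnetic field parameters, concatenation of the two feature vectors, and a fully connected sigmoid head for binary flare classification.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(params, weights, bias):
    """Valid 1D convolution over a parameter vector, one row of
    `weights` per output channel, followed by ReLU."""
    k = weights.shape[1]
    n = params.size - k + 1
    out = np.stack([
        np.array([w @ params[i:i + k] for i in range(n)]) + b
        for w, b in zip(weights, bias)
    ])
    return np.maximum(out, 0.0)

# Placeholder for the Swin Transformer image-branch output
# (dimension 64 is an assumption, not from the paper).
img_feat = rng.normal(size=64)

# Magnetic field characteristic parameters (20 values, assumed).
params = rng.normal(size=20)

# 1D CNN branch: 4 channels, kernel width 5, then global average pooling.
w_conv = rng.normal(size=(4, 5))
b_conv = np.zeros(4)
param_feat = conv1d(params, w_conv, b_conv).mean(axis=1)

# Feature-level fusion: concatenate the two modality vectors.
fused = np.concatenate([img_feat, param_feat])

# Fully connected classification head with a sigmoid output,
# interpreted as the probability of a flare.
w_fc = rng.normal(size=fused.size)
p_flare = 1.0 / (1.0 + np.exp(-(w_fc @ fused)))
print(f"fused dimension: {fused.size}, P(flare) = {p_flare:.3f}")
```

In a trained model the convolution and classifier weights would of course be learned jointly, so gradients from the fused head update both branches; the random weights here only demonstrate the data flow.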