Abstract
With the rapid development of network technology and the automation of 5G networks, cyber-attacks have become increasingly complex and threatening. In response to these threats, researchers have developed various network intrusion detection systems (NIDS) to monitor network traffic. However, the incessant emergence of new attack techniques and the lack of system interpretability pose challenges to improving the detection performance of NIDS. To address these issues, this paper proposes a hybrid explainable neural network-based framework that improves both the interpretability of our model and the performance in detecting new attacks through the innovative application of the explainable artificial intelligence (XAI) method. We introduce the Shapley additive explanations (SHAP) method to explain a light gradient boosting machine (LightGBM) model. Additionally, we propose an autoencoder long short-term memory (AE-LSTM) network to reconstruct the previously generated SHAP values. Furthermore, we define a threshold based on reconstruction errors observed during the training phase. Any network flow whose reconstruction error surpasses this threshold is classified as an attack flow. This approach enhances the framework's ability to accurately identify attacks. We achieve an accuracy of 92.65%, a recall of 95.26%, a precision of 92.57%, and an F1-score of 93.90% on the NSL-KDD dataset. Experimental results demonstrate that our approach achieves detection performance on par with state-of-the-art methods.
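A minimal sketch of the pipeline summarized above may help fix ideas: a LightGBM classifier is explained with SHAP, an LSTM autoencoder learns to reconstruct the SHAP vectors of training flows, and a threshold derived from training reconstruction errors flags attack flows. All concrete choices here (the class name AELSTM, the hidden size, the percentile threshold rule, and treating each SHAP vector as a one-step sequence) are illustrative assumptions; the abstract does not specify the paper's exact architecture or hyperparameters.

```python
# Illustrative sketch only: SHAP-explained LightGBM + LSTM autoencoder
# with a reconstruction-error threshold. Not the paper's exact method.
import numpy as np
import lightgbm as lgb
import shap
import torch
import torch.nn as nn

def shap_vectors(model, X):
    """Compute one SHAP attribution vector per network flow."""
    values = shap.TreeExplainer(model).shap_values(X)
    # Some shap versions return a per-class list for binary models.
    return np.asarray(values[1] if isinstance(values, list) else values,
                      dtype=np.float32)

class AELSTM(nn.Module):
    """LSTM autoencoder over SHAP vectors, each treated as a 1-step sequence."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, n_features, batch_first=True)

    def forward(self, x):                 # x: (batch, 1, n_features)
        z, _ = self.encoder(x)
        recon, _ = self.decoder(z)
        return recon

def fit_and_threshold(S_train, epochs=20, lr=1e-3, pct=95):
    """Train the autoencoder and set the threshold from training errors."""
    x = torch.from_numpy(S_train).unsqueeze(1)      # (n, 1, n_features)
    ae = AELSTM(S_train.shape[1])
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(ae(x), x)
        loss.backward()
        opt.step()
    with torch.no_grad():
        errors = ((ae(x) - x) ** 2).mean(dim=(1, 2)).numpy()
    # Assumed rule: a percentile of training reconstruction errors.
    return ae, np.percentile(errors, pct)

def detect(ae, threshold, S_test):
    """Flag flows whose reconstruction error exceeds the threshold."""
    x = torch.from_numpy(S_test).unsqueeze(1)
    with torch.no_grad():
        errors = ((ae(x) - x) ** 2).mean(dim=(1, 2)).numpy()
    return errors > threshold                       # True = attack flow

# Usage (hypothetical data X_train, y_train, X_test):
# booster = lgb.LGBMClassifier(n_estimators=100).fit(X_train, y_train)
# ae, thr = fit_and_threshold(shap_vectors(booster, X_train))
# is_attack = detect(ae, thr, shap_vectors(booster, X_test))
```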
Funding
Fundamental Research Funds for the Central Universities (x2wjD2230230)
Natural Science Foundation of Guangdong Province of China, CCF-Phytium Fund ()
Cultivation of Shenzhen Excellent Technological and Innovative Talents (RCBS20200714114943014)
Basic Research of Shenzhen Science and Technology Plan (JCYJ20210324123802006)