SCIENCE CHINA Information Sciences, 2024, Vol. 67, Issue (7): 68-86. DOI: 10.1007/s11432-023-4067-x

HEN:a novel hybrid explainable neural network based framework for robust network intrusion detection

Wei WEI 1, Sijin CHEN 2, Cen CHEN 3, Heshi WANG 4, Jing LIU 2, Zhongyao CHENG 5, Xiaofeng ZOU 6

Author information

  • 1. School of Computer Science and Engineering, Xi'an University of Technology; Shaanxi Key Laboratory for Network Computing and Security Technology, Xi'an 710048, China
  • 2. School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430065, China
  • 3. School of Future Technology, South China University of Technology, Guangzhou 510641, China; Shenzhen Research Institute of Hunan University, Shenzhen 518052, China
  • 4. School of Computer Science, Hunan University of Technology and Business, Changsha 410205, China
  • 5. Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore
  • 6. School of Future Technology, South China University of Technology, Guangzhou 510641, China

Abstract

With the rapid development of network technology and the automation process for 5G, cyber-attacks have become increasingly complex and threatening. In response to these threats, researchers have developed various network intrusion detection systems (NIDS) to monitor network traffic. However, the incessant emergence of new attack techniques and the lack of system interpretability pose challenges to improving the detection performance of NIDS. To address these issues, this paper proposes a hybrid explainable neural network-based framework that improves both the interpretability of our model and the performance in detecting new attacks through the innovative application of the explainable artificial intelligence (XAI) method. We introduce the Shapley additive explanations (SHAP) method to explain a light gradient boosting machine (LightGBM) model. Additionally, we propose an autoencoder long short-term memory (AE-LSTM) network to reconstruct the SHAP values generated in the previous step. Furthermore, we define a threshold based on the reconstruction errors observed during the training phase: any network flow whose reconstruction error surpasses this threshold is classified as an attack flow. This approach enhances the framework's ability to accurately identify attacks. We achieve an accuracy of 92.65%, a recall of 95.26%, a precision of 92.57%, and an F1-score of 93.90% on the NSL-KDD dataset. Experimental results demonstrate that our approach delivers detection performance on par with state-of-the-art methods.
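The detection logic the abstract describes (reconstruct explanation vectors, then flag any flow whose reconstruction error exceeds a threshold fixed during training) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the SHAP vectors are replaced by synthetic stand-ins, and a linear PCA reconstruction stands in for the AE-LSTM; all names and parameters here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for SHAP value vectors of benign flows.
# In the paper these would come from SHAP applied to a trained LightGBM model.
benign = rng.normal(0.0, 1.0, size=(500, 8))

# Stand-in for the AE-LSTM: a linear reconstruction via truncated PCA.
mean = benign.mean(axis=0)
_, _, vt = np.linalg.svd(benign - mean, full_matrices=False)
components = vt[:3]  # keep the top 3 principal components

def reconstruct(x):
    """Project onto the learned subspace and map back."""
    z = (x - mean) @ components.T
    return z @ components + mean

def recon_error(x):
    """Per-sample reconstruction error (L2 norm of the residual)."""
    return np.linalg.norm(x - reconstruct(x), axis=-1)

# Threshold from training-phase reconstruction errors (here: 99th percentile).
threshold = np.percentile(recon_error(benign), 99)

# A flow whose error surpasses the threshold is classified as an attack flow.
candidate = rng.normal(4.0, 1.0, size=(1, 8))  # distribution shift
is_attack = recon_error(candidate)[0] > threshold
```

The key design point mirrored here is that the threshold is derived only from errors seen during training, so anything the reconstructor fails to compress well, including previously unseen attack patterns, is flagged without ever being labeled as an attack during training.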

Key words

explainable artificial intelligence; light gradient boosting machine; machine learning; network intrusion detection; Shapley additive explanations; hybrid explainable neural network (HEN)


Funding

Fundamental Research Funds for the Central Universities(x2wjD2230230)

Natural Science Foundation of Guangdong Province of China, CCF-Phytium Fund

Cultivation of Shenzhen Excellent Technological and Innovative Talents(RCBS20200714114943014)

Basic Research of Shenzhen Science and Technology Plan(JCYJ20210324123802006)

Publication year

2024
SCIENCE CHINA Information Sciences
Chinese Academy of Sciences
Indexed in: CSTPCD, EI
Impact factor: 0.715
ISSN: 1674-733X