Towards Rigorous Explanations for Neural Network Driven Intrusion Detection
With the growing pervasiveness of artificial intelligence, neural network models have been widely adopted for decision making. However, the decisions they make are difficult to trust because the models cannot explain their reasoning. Most current explainable AI methods provide only vague and ambiguous explanations, which cannot satisfy the requirements of security-critical domains such as cybersecurity, where the misinterpretation of even one bit can cause serious semantic misunderstandings. This paper proposes a rigorous explanation methodology, the boundary input value (BIV) algorithm, based on knowledge compilation. Neural network models are translated into formal logic expressions, from which prime implicants are extracted; these prime implicants constitute rigorous explanations of the model's decisions. The proposed BIV algorithm has been evaluated on a real-life DoS intrusion detection neural network model. The extracted explanations and the time overhead are analyzed and compared with the existing explanation method SHAP. The results show that the proposed BIV algorithm can provide rigorous explanations efficiently and is therefore scalable.
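To make the notion of a prime-implicant explanation concrete, the following is a minimal, hypothetical sketch (not the paper's BIV algorithm): a toy binarized "neuron" is treated as a Boolean function, and its prime implicants are found by brute-force enumeration. Each prime implicant is a minimal set of fixed input values that, on its own, guarantees the decision; the names `neuron`, `implies`, and `prime_implicants` are illustrative assumptions, and the exhaustive search is for exposition only, not a scalable knowledge-compilation procedure.

```python
from itertools import product

# Toy binarized "neuron": fires when at least two of the three
# binary inputs are set, i.e., f(x) = x1*x2 + x1*x3 + x2*x3.
def neuron(x1, x2, x3):
    return x1 + x2 + x3 >= 2

# A term assigns 0/1 to some inputs; None means "don't care".
def implies(term):
    """Check whether every completion of `term` makes the neuron fire."""
    free = [i for i, v in enumerate(term) if v is None]
    for bits in product([0, 1], repeat=len(free)):
        full = list(term)
        for i, b in zip(free, bits):
            full[i] = b
        if not neuron(*full):
            return False
    return True

def prime_implicants(n_vars=3):
    """Enumerate all implicants and keep the maximally general ones."""
    implicants = [t for t in product([0, 1, None], repeat=n_vars)
                  if any(v is not None for v in t) and implies(t)]
    primes = []
    for t in implicants:
        # t is prime if dropping any fixed literal breaks the implication.
        if all(not implies(t[:i] + (None,) + t[i + 1:])
               for i, v in enumerate(t) if v is not None):
            primes.append(t)
    return primes

print(prime_implicants())
# -> [(None, 1, 1), (1, None, 1), (1, 1, None)]
# Each tuple is a minimal input pattern that alone forces the decision,
# i.e., a rigorous explanation in the prime-implicant sense.
```

In this toy case the three prime implicants say exactly which input combinations are sufficient for the positive decision, with no extraneous features; this is the sense in which prime implicants are "rigorous" compared with attribution scores such as those produced by SHAP.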